HUMAN AND MACHINE CREATIVITY: SOCIAL AND ETHICAL ASPECTS OF THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

The paper has a review character, and the presented analysis is based on theoretical considerations referring to the works of other authors. The aim of the paper is to draw attention to the importance of human creativity in the context of technology development, with special emphasis on artificial intelligence. For the purpose of exploration, the study applies philosophical methods, especially methods typical of ethical reflection, supported by the analysis of existing data derived from the social sciences, especially contemporary sociology. The study is synthetic in nature and comprises theoretical considerations concerning several issues. It shows the positive and creative possibilities of using artificial intelligence in social and economic life, identifies potential threats that may be associated with the inappropriate use of artificial intelligence (robots and information systems), and points to the risks that arise when people place too much trust in algorithms. Attention is focused on the social and ethical aspects of the human-machine relationship, with special emphasis on pragmatism, trust and fascination with new technologies, as well as the principles of robot ethics. A significant part of the considerations also concerns the effects of automation processes, including the functioning of the labour market, human creative abilities and appropriate competences. The third part of the study indicates still undeveloped research fields related to artificial intelligence. The conducted analysis may indicate directions for further sociological and philosophical research that considers the specificity of artificial intelligence and seeks support in interdisciplinary research teams.

relationships in the context of changes in the labour market and the growing importance of the civilizational competences of users. The serious acceleration of changes resulting from these processes means that no single individual, with specific cognitive abilities, can keep track of all technological advances. The necessity to acquire new skills to operate even moderately complex devices, or to rely on the help of professionals, leads to various consequences in the form of further social problems and frustration. On the other hand, creative solutions, new companies and new products, as well as new jobs and professions, are emerging.
Since the development of modern technologies became revolutionary in nature, it has become an even more exciting and extremely difficult research topic. The economy is the social sphere that makes the greatest use of new technological applications, assuming favourable cultural and political conditions. Financial profit is the main motivation of business practitioners, who increasingly often confront challenges related to the creation of prototype solutions in cooperation with scientists. Such coordination of efforts can sometimes cause considerable problems, which may in turn have profoundly serious social consequences.
It is possible to introduce technical elements that heal and equip our bodies with additional skills: they can, for example, expand the possibility of acquiring new information, allow for hearing sounds of lower or higher frequency than the human ear can typically detect, improve visual acuity, or equip human bodies with the ability to perform hard work in difficult conditions. These aspects of the human-machine relationship seem positive and do not raise any serious objections. A full connection of the human brain with the computer seems to be a matter of the near future. What consequences could this bring? In a dark scenario, we can imagine that the artifacts of new technologies will be misused by people, for example to gain an advantage over competitors in accessing goods and services. This would become the cause of a new form of social injustice in competition for prestigious jobs or a higher social position (Bostrom, 2014).
Analyses of highly creative processes in AI systems can be found in the literature. An interesting conclusion of Russian researchers concerning the creative abilities of AI is that such systems cannot yet compete with humans because they lack an information base comparable to a given person's common-sense knowledge. However, the authors argue that there are essentially no insurmountable obstacles for AI, and that it will be able to compete with humans in the future (Elkhova & Kudryashev, 2017).
As other researchers emphasise: "The main task of a sociologist is to reconstruct facts and unveil hidden mechanisms that establish a causal relation between certain actions and certain consequences" (Campa, 2015, p. 12).
Undoubtedly, the social sciences need both research on human-machine relations and theoretical considerations focused on the social and ethical aspects of these issues. Social researchers should draw attention to the social implications of applying engineers' achievements, as they have greater sensitivity and professional preparation for recognising how these achievements change the world and people's lifestyles.
Appreciating the previous analyses, it is worth trying to supplement them with new problem areas that offer a chance to expand the applied approach to perceiving human creativity in the context of technological development, as well as the ethical and social aspects of these phenomena. Therefore, the purpose of the paper is a theoretical analysis of the human-machine relationship in the context of technology development, with special emphasis on the complex situations that are revealed in contemporary relations between humans and machines equipped with AI. The issue is so important to users that it cannot be left to computer scientists and engineers alone. Looking at the development of technology from the perspective of the social sciences and humanities, it seems relevant to consider the possible consequences of technological progress, analyse the change in human attitudes towards the artifacts of technology, and pay attention to the need for users of new technologies to develop new information technology (IT), social and ethical competences.

Methodological approach
The multidimensional nature of the issues related to human-machine relations determines the methodological approach and the scientific methods applied. It has prompted a further interdisciplinary synthesis of the achievements of the social sciences, the humanities and the natural sciences, which borrow from one another and engage cybernetics. Scientists began to use quantitative methods to study certain social and humanistic aspects, which resulted in a new, more rigorous understanding of common concepts and methods. The humanities apply the methodology of studying ideas, values, beliefs, stereotypes and practices related to specific social and cultural phenomena. These are located on the border of science, the social perception of technology and popular culture. In this context, the concept of the social imaginarium seems interesting, as it is consistent with a specific cultural narrative (Wojewoda, 2020).
For the purpose of exploration, the study applies philosophical methods, especially methods typical of ethical reflections, also supported by the analysis of existing data derived from social sciences, especially contemporary sociology.
By considering the creativity of man and machine, and the effects of these phenomena, a degree of methodological freedom is allowed; like Alvin Toffler, we are guided by the idea of synthesis: "Without fully grasping the whole phenomenon, we cannot understand the clash of powerful forces in the modern world; like survivors, we are trying to navigate, without a compass or a map, among the dangerous reefs during a storm. In a highly specialized culture, where everyone is engrossed in a meticulous analysis of scattered and huge amount of data, synthesis is not only useful, but even crucial" (1986, p. 24).
The starting point for such a methodologically constructed analysis is an attempt to combine selected problematic issues: 1) the social and ethical aspects of the human-machine relationship, with special emphasis on pragmatism, trust and fascination with new technologies and the principles of robot ethics; 2) the effects of automation processes, the creative capabilities of human beings and the possession of appropriate competences; 3) the research needs emerging from the preceding considerations on the still undeveloped research fields related to AI. In our view, the empirical use of the proposed trends may inspire not only researchers in the social sciences and humanities, but also sensitize researchers in the natural and technical sciences.

Human-machine relationship: discussion around pragmatism, trust and fascination with ethical robots
Social recognition for new technologies takes place within the framework of the axiological structure that is known to us. In the area of basic ("lower") values, in a positive scenario, intelligent machines help us to live comfortably, healthily and longer, while maintaining an appropriate quality of life. The problem occurs on the level of "higher" values, related to the ability of individuals to decide about themselves. It is assumed that freedom is an important value for people; it constitutes the basis for making choices in the area of goods, services, workplace and life strategies. The creativity of human thinking and acting is founded on the possibility of making independent choices. If we were to resign from freedom in favour of the comfortable and safe use of technical artifacts, the creativity of action would have to be understood not as inventing new things, but as creative imitation, adaptation, a search for ever more convenient ways of use, and choosing from what an intelligent machine suggests (McIlwraith et al., 2017). The question is whether this means the loss of freedom. It seems it does. At the level of higher values, the possibilities of digital immortality and of recording the individual consciousness of a human being in digital form also arise. The question is whether this is the understanding of immortality that we expect.
The social perception of AI largely depends on its image in popular culture, including film productions. In such films as Blade Runner (1982, directed by Ridley Scott), Blade Runner 2049 (2017, directed by Denis Villeneuve), Ex Machina (2014, directed by Alex Garland) and I, Robot (2004, directed by Alex Proyas), we find a futuristic vision of the world in which people and robots function side by side. In cinematic images, humanoid robots have desires similar to humans, such as the desire for freedom, the feeling of human emotions, the right to personal distinctness, or the domination of some individuals over others, especially the less intelligent ones. Ascribing typically human desires to machines is an anthropomorphization error: assigning human characteristics to things. We associate the desires above with intelligence in a general sense, forgetting that they concern human intelligence. We have no grounds to believe that these desires also apply to robots equipped with AI or to information systems (cf. Waytz et al., 2010). Attributing human qualities to intelligent robots creates a fear of technological advances. The technical interpretation of the role of AI differs from that inherent in popular culture: the technological perspective is geared towards the creative and constructive use of AI artifacts. The purpose of this reflection is to look for ways to complement human action, i.e., augmentation. Here, however, a question arises: are we able to protect ourselves against the misuse of AI?
The notion of "AI" means intelligence that develops in an engineering process on silicon compounds (in silico). The achievements of neurology, mathematics, cognitive psychology, cognitive science and the philosophy of technology are used in the process of its creation. John McCarthy, who coined the notion of AI, associated it with a machine whose operation resembles human intelligence. In the basic version, it is about creating mathematical and logical models that can be used in computer programs; in the advanced version, it is about creating self-learning programs based on models of neural and associative networks that allow for machine learning without human supervision (Rich & Knight, 1990).
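The contrast between the two versions can be illustrated with a minimal, purely illustrative sketch (the data and the toy routine below are our own construction, not any production algorithm): a simple clustering procedure groups numbers without ever being shown a correct answer, which is the most elementary sense in which a program learns "unsupervised by man".

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """A toy k-means clusterer: the machine groups data into k clusters
    without any human-provided labels (unsupervised learning)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: centroids move to the mean of their cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, around 0 and around 10; no labels are ever supplied.
data = [0.1, 0.2, -0.1, 0.0, 9.9, 10.1, 10.0, 9.8]
print(kmeans(data, k=2))
```

The basic version, by contrast, would correspond to a hand-written rule ("numbers below 5 belong to group one"), i.e., a fixed logical model supplied by the programmer rather than discovered from the data.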
AI is the result of human creation and can be considered in terms of the appropriate or inappropriate use of devices and operating systems. The artifacts of AI are not competitors to human intelligence; we do not create them so that they can show an "artificial will" to control the environment. Concerns about the loss of humans' dominant position in the world come from the assumption that the development of intelligence follows an anthropological model. We are afraid that a new species will emerge (technoevolution) which, apart from intelligence, will have human desires related to emotional life; we are afraid of such robot feelings as anger, suspicion and the need to control others. Such a scheme of the development of AI is unwarranted; it is rather a transfer of human problems to the world of machines. For now, people should not be afraid of intelligent robots, but of the irresponsible use of these robots by humans. The ethics of AI users remains the major issue (Bostrom, 2014). A positive role of research on AI consists in linking engineering problems related to smart machines with human brain dysfunctions, especially disorders on the autism spectrum. On the one hand, it is about teaching human behaviour to machines; on the other, it is about empowering people on the autism spectrum to focus their attention and exercise social skills. Humanoid robots created for therapeutic purposes are an example here (Ratajczak et al., 2021).
AI can be creative in the sense of developing solutions that humans have not created so far. An example is the old Chinese game of Go and AlphaGo, a computer program based on the principles of AI, created by Google's DeepMind. AlphaGo not only defeated the Chinese and Korean masters of the game, but was also able to create new solutions that had been unknown to its experts (Silver et al., 2016).
When analysing the issue of AI in relation to our fears and potential threats, several areas can be indicated. They concern, among others, the possibility that because of some technological error robotic machines will threaten us, or that someone who has power over the machines will decide to use them to discipline society (e.g., to suppress street protests) or to persecute political enemies (the elimination of opponents of those in power). Appropriate legal rules should be created to prevent such inappropriate use of robots (McIlwraith et al., 2017).
The convergence between the world of machines and the world of people takes place in both directions. We construct robots that are to resemble and imitate humans. However, we do not expect full assimilation; robots are supposed to resemble "servants", and we do not expect them to be equal partners for us, much less someone (or something) that will take control of our lives. Stories about robots taking control over the world are part of film narratives, but actual robotization has little in common with them. It is difficult to stop the development of technology for fear that someone will misuse the artifacts of cybertechnology. At present we perceive robots as useful tools, with more information processing power (big data) and greater precision and reliability of operation (medical robots) than humans (Nawrat, 2012). We want robots to perform tasks that are burdensome for us (e.g., caring for chronically ill and elderly people). The latter issue can also be considered in terms of threats, but these concern not technological devices, but changes in the model of social sensitivity in high-income societies, where the model of moral obligations towards old and sick people is changing. Instead of taking personal care of old parents, an adult child will buy them a robot to perform the tasks of a caregiver, nurse and life companion.
It seems that the greater danger comes from human fascination with the automation of behaviour. Man has been impressed by the effectiveness of machines since the 19th century. In their management models, institutions expect employees to show similar operational reliability, especially when difficult choices must be made and each solution is burdened with potentially bad consequences: for example, when we need to decide who to save first, how to establish rules for the fair distribution of goods among those in need, or how to establish rules for access to rare goods, such as organ transplants. Perhaps it would be better to rely on AI in such situations? The issue of the technologization of life is related to the use of algorithms. Algorithms concern the work of machines but are also related to the functioning of human communities. Algorithmizing human behaviour leads to a situation in which we stop practicing certain skills: since we can trust AI, we can stop thinking. We tend to think about machine algorithms in terms of strong valuation, judging them unequivocally as good or bad. A rational approach, however, suggests that even if we perceive algorithms as necessary, an attitude of limited trust towards them still needs to be maintained. Accidents involving autonomous vehicles indicate that their direct cause was the lack of attention of the person in the vehicle to what was happening on the road, combined with the transfer of full control of the vehicle to the car computer. The computer failed to recognize a danger on the road and caused an accident that could have been avoided if the driver had monitored the car's driving route.
By analogy, it can be indicated that excessive trust in algorithms (algorithmic procedures operating in organizations) may result in the disappearance of responsible thinking and acting within an institution. In such a case, moral responsibility, which is associated with reflection on the possible negative consequences of our actions and the elimination of potential threats, may be reduced to procedural responsibility, limited to the performance of specific tasks and nothing more (Lorenzi & Berrebi, 2017). This leads to passivity of behaviour and lowers creativity in people's thinking. In this way, we abandon the use of intuition in action, which is one of the key human skills.
Human behaviours are so unpredictable that so far it has been impossible to express them in the form of algorithms. Announcements made by companies working on the creation of a robot equipped with AI and identical to a human stimulate our imagination and curiosity, as well as arousing constantly fuelled fears. So far, however, these announcements have seemed exaggerated. The disappointments resulting from the operation of AI come from the fact that devices equipped with it are not able to ensure safe use, whether in relation to the people in the vehicle (it is difficult to call them drivers) or to people exposed to the impact of autonomous cars or machines. A human being is expected to have intuition and the ability to act beyond the scheme; can people expect similar skills from a robot (Cave et al., 2019)? For example, when drivers hear children playing football nearby, they should slow down, even though there is no sign requiring it. Using algorithms in such a situation is difficult, as is an algorithmic approach to the unconventional behaviour of animals on the road. Here we face a dilemma: should we work on more computing power for AI, or on the creation of machine artificial intuition? The potential consequences of both should be considered. A car driven by AI would have to know that in certain circumstances it is necessary to break the rules and drive unconventionally; for example, a traffic accident or a fallen tree can change the situation on the road. This type of creative approach to the rules is expected from a human, but should it be expected from a machine? Due to the fear of the unsupervised operation of machines, man creates robots that are only partially, not fully, automatic.
The potential applications of AI, in medicine and the judiciary among others, are enormous. However, a lot depends on whether we will be able to use its resources reasonably, refrain from trusting computer algorithms absolutely, and protect the security of our personal data. Interestingly, AI can be racially, sexually and ethnically prejudiced. Algorithms only seem to be neutral in this respect: some information systems used in judicial and medical institutions show that "prejudices", including racial and gender bias, are also embedded in algorithms. However, these are not the prejudices of the machines themselves, but of the information that people have fed into IT resources. An AI chatbot named Tay, developed by Microsoft, is an example of a failed experiment. It was disabled just a day after its launch due to highly controversial Twitter entries. Within 24 hours, on the basis of data from a specific virtual environment, Tay, which pretended to be a teenager, underwent a metamorphosis from a laid-back character loving people into a robot telling indiscriminate jokes and endorsing Adolf Hitler's extermination of the Jews. It developed solely on the basis of contacts with real people (Forbes, 2016).
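The mechanism by which "prejudices" enter algorithms can be sketched in a few lines. The applicant records and the trivial model below are entirely hypothetical, constructed only to illustrate the point: a model fitted to biased historical decisions faithfully reproduces that bias, even though the fitting procedure itself is neutral.

```python
from collections import Counter, defaultdict

# Hypothetical historical decisions: past human reviewers approved
# group "A" applicants far more often than group "B" applicants,
# even at the same qualification level. (Illustrative data only.)
history = (
    [("A", "qualified", "approve")] * 90 + [("A", "qualified", "reject")] * 10 +
    [("B", "qualified", "approve")] * 40 + [("B", "qualified", "reject")] * 60
)

def train_majority_model(records):
    """A trivial 'model' that predicts the majority past decision for each
    (group, qualification) pair. It encodes whatever bias the historical
    records contain; the training procedure itself adds none."""
    votes = defaultdict(Counter)
    for group, qualification, decision in records:
        votes[(group, qualification)][decision] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

model = train_majority_model(history)
# Two equally qualified applicants receive different outcomes:
print(model[("A", "qualified")])  # approve
print(model[("B", "qualified")])  # reject
```

Real systems use far more elaborate models, but the principle is the same: the "prejudice" lives in the training data, exactly as the Tay case illustrates.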
Several concerns arise here, related to the loss of human freedom and to the constant monitoring of views and attitudes through the creation of specific user and consumer profiles. For example, using guidance techniques, algorithms act as a guide for users visiting subsequent websites, and we do not even know when this happens or to which website it will lead us; or the system checks and collects the private information we send, in accordance with the security policy adopted by the authorities (Cave et al., 2019).
Such situations raise ethical concerns and indicate the need for research that would enable the creation of criteria for assessing whether a given AI model is people-friendly and trustworthy.
Since the times of Asimov (1993), ideas of creating ethical robots, i.e., robots friendly, understandable and safe for humans, have kept appearing. However, it is difficult to conclude that the set of norms proposed by Asimov solves all human moral dilemmas.
The ethics of AI has two aspects: the first concerns the principles of using devices equipped with AI by a moral entity, a human or an institution, whereas the second concerns the robots themselves, when they become moral entities. In the first case, we are dealing with the principles of the ethics of technology. Among the various solutions proposed within this trend, attention should be paid to the concept of the ethics of responsibility (Jonas, 1984), which postulates expanding the scope of legal responsibility (post factum) with the rules of moral responsibility (pro factum) and proposes the need to consider the consequences resulting from the introduction of technological devices into general use. By treating AI as a tool, we focus on the human being as a user of robots. In this case, autonomy is gradual. If an individual allows the car computer to drive, it does not mean that the person's responsibility ceases. Similarly, the people who operate autonomous military devices (drones and armoured vehicles) are responsible for them. The unsupervised operation of machines equipped with AI concerns the gradual reduction of human control over these devices in six stages: cruise control and parking assistance, then driving without using the legs, without using the hands, without using the eyes, without using the mind, until AI takes full control of the vehicle. So far, the stage of fully automatic machine operation unsupervised by man remains a matter of the distant future. The current work of engineers focuses on functionality and safety related to the use of machines by humans (cf. Fry, 2018).
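The six stages of handing over control, as listed in the text, can be sketched as an ordered scale. The sketch below is a rough illustration in Python; the labels follow the text, not any official classification such as the SAE levels, and the supervision rule is an assumption made for the example.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Rough sketch of the six stages of handing control to the machine,
    as listed in the text (labels are illustrative, not an official standard)."""
    DRIVER_ASSISTANCE = 0   # cruise control, parking assistance
    FEET_OFF = 1            # not using the legs
    HANDS_OFF = 2           # not using the hands
    EYES_OFF = 3            # not using the eyes
    MIND_OFF = 4            # not using the mind
    FULL_AUTOMATION = 5     # AI takes full control of the vehicle

def human_supervision_required(level: AutomationLevel) -> bool:
    """Assumed rule: below the 'eyes off' stage the human must still monitor
    the road, so moral and legal responsibility clearly stays with the person."""
    return level < AutomationLevel.EYES_OFF

print(human_supervision_required(AutomationLevel.HANDS_OFF))       # True
print(human_supervision_required(AutomationLevel.FULL_AUTOMATION)) # False
```

The point of the gradation is exactly the one made above: responsibility does not vanish at any single stage but is transferred step by step, which is why the lower stages leave it unambiguously with the human.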
When we treat robots as moral entities, the situation changes radically. The ethical principles we know are anthropocentric: they assume that the entity has human "nature" and human moral awareness. Will "thinking" robots that can make autonomous decisions, perceived as representatives of a new species, understand moral situations in the same way as humans, or differently? Will they be able to follow the principles of moral responsibility? So far, there are more questions than answers here. The issue of possible machine failures must also be considered. So far, it has not been possible to create ways for AI to function that would allow it to understand and properly interpret the diversity of human axiological preferences and the complexity of moral situations. Recognizing intelligent machines as moral entities remains, so far, a vision of possible scenarios for the future (Cave et al., 2019).
One of the interesting attempts to create ethical rules for robots is the concept of phronetic robots, which would have practical knowledge allowing AI to adapt to changing circumstances, taking into account the algorithmic accumulation of experiences, associative knowledge and the exchange of information between humans and robots (Polak & Krzanowski, 2020). The issue of creating ethical rules for robots will soon arouse great interest among engineers, philosophers and people managing enterprises and the state.
Work on the "ethical robot" is accompanied by changes in the attitudes of enterprises. Some of them develop various codes of ethics, also covering the use of new technologies, the introduction of innovations and the effects of these processes. The production of specialized machines leads to changes in the labour market. This especially concerns those professions in which human activities can be algorithmized and schematic action based on procedures is expected: for example, activities related to the operation of less specialized devices, but recently also legal, accounting and logistics services. This leads to people losing jobs, because the machine is a cheaper worker than a human. Companies focused solely on the economic principle of profit maximization will replace people with machines, which can result in mass unemployment. It is indicated that due to robotization, the demand for workers in the food processing industry, administrative employees and money circulation will decrease in Poland (Łapiński et al., 2013). These types of changes characterize breakthrough stages of civilization. Alleviating the resulting crisis in the labour market requires a creative attitude from the representatives of various economic sectors and from politicians: new forms of work, professions and employment must be created. This is also related to the model of education implemented in schools and universities and by employers themselves, who could have a real impact on changes in this area through appropriately exerted pressure.
Promoting the concept of corporate digital responsibility is a special form of developing employees' awareness. This new trend within the framework of the previously known idea of social responsibility represents the awareness of the obligations of organizations working for the development of technology and using technology to provide services. The attitude of digital responsibility promotes balance and guiding digital progress in such a way that technology positively affects the environment (Suchacka, 2019).

Automation processes, creativity and civilizational competences
Creativity, and hence human involvement, is undoubtedly associated with having appropriate competences, which are a factor directly determining the appropriate level of employee effectiveness in the organizational system. Competences relate to practical actions in specific situations and organizational contexts. They are updated in the work process and, as a dynamic structure, they evolve under the influence of changes taking place in the economy and human life (Walkowiak, 2004, p. 20).
In the dynamically changing labour market, so-called digital competences, primarily related to computer literacy and using the Internet, have become increasingly important for the effective functioning of workers in an organisation, which older people especially experience. A deficit in this respect will be a crucial determinant of the marginalization of individuals in the labour market and of their pauperization. Having digital competences lowers the level of fear of constant changes, both in contemporary organizations and in their environment. Thus, it seems reasonable to formulate the thesis that the willingness to accept changes occurring both at the organizational level and in the socio-economic dimension grows with the level of key civilizational competences; such an individual is more open to creative solutions. Unfortunately, a deficit in the use of digital technologies is a key determinant of the exclusion of an individual not only in the labour market (a greater risk of unemployment or low-paid jobs), but also a factor of social marginalization, e.g., due to difficult access to services that require on-line registration (Muster, 2010, p. 49).
Thus, we can speak of a growing digital divide, i.e., the division into people who use the Internet and those who do not. Based on empirical research, it can be concluded that in Poland, 85% of employed and 69% of unemployed Poles use the Internet. Moreover, and especially important from the perspective of the functioning of an individual in the labour market, Internet users have a greater chance of finding a job, and they also remain in employment more often than people who do not use the Internet (Batorski, 2015, pp. 187-188). Unfortunately, Eurostat surveys indicate that Poland ranks low in the European rankings of digital competences (Głomb, 2020, pp. 29-30).
Automation and robotization undoubtedly imply an increase in production while reducing the demand for labour resources. However, it should be emphasized that the technological development observed in recent decades did not result in a significant increase in unemployment rates, which is related to the continuous emergence of new professions and specialisations. The increase in unemployment is to a much greater extent related to cyclical recession, as was clearly observed during the recent economic crisis: over the course of one year, from December 2008 to December 2009, unemployment in Poland increased from 9.5% to 12.1% (Główny Urząd Statystyczny, 2021).
In the longer term -between 2002 and 2019, the unemployment rate in Poland decreased from 20% to 5.2% (Główny Urząd Statystyczny, 2021). Moreover, the literature on the subject emphasizes that "in the case of Poland, also in the EU and the euro area there was an increase in the value of the European innovation index" (Błachowicz, 2019, p. 13).
This may be a consequence of the currently observed industrial revolution, which is probably most visible in the IT industry. The innovative software produced by this sector will have an ever greater impact on replacing human teams, also in the sphere of mental work, and not, as before, primarily in physical work (Błachowicz, 2019, p. 10; Kośmicki & Malinowska, 2015, p. 6).
The question is whether the labour market will be able to keep pace with the dynamic transformation of labour relations driven by science (Afeltowicz, 2007, p. 122). It is difficult to find a clear answer to this question. In this case, we are moving between two paradigms. The first highlights the problem of potential technological unemployment of a previously unknown scale, expected in the future: what will in fact be observed is "the end of work and the post-market era, which is obviously associated with a number of unprecedented negative consequences" (Afeltowicz, 2007, p. 122).
In the case of massive technological unemployment related to the progressive automation of the production of goods and services, guaranteeing an unconditional basic income for a large part of the working-age population, and managing the natural human tendency to be active in the professional sphere, will pose a challenge for government agencies. In the second case, as emphasized in the literature on the subject: "if the supporters of the compensation theory are right, then because of continuous innovations, the participants in the labour market will be doomed to a permanent sense of risk and life-long-learning, i.e., constant retraining and training" (Afeltowicz, 2007, p. 122).
Different estimates can be found concerning the expected loss of jobs due to progressive automation and robotization. Michael Osborne and Carl Benedikt Frey formulated the thesis that the potential to automate a given profession, and thus the threat of its disappearance, depends on how routine it is. In their opinion, automation primarily threatens professions in which employees perform repetitive manual activities, and is least likely to threaten professions in which creativity, negotiation skills or the need to interact with people count. These researchers formulated the controversial thesis that almost half of all professions may disappear because of progressing automation (Platforma Przemysłu Przyszłości, 2020). Their research was criticized in scientific circles, with arguments that the automation of specific professions should be studied at the level of the potential for automating the individual activities assigned to a given profession (Platforma Przemysłu Przyszłości, 2020).
Moreover, the situation in the labour market will be increasingly affected by the global COVID-19 pandemic. The epidemic threat rapidly stimulated the development of remote work tools based on modern IT solutions and the increasingly widespread prevalence of remote work, activating unprecedented amounts of human creativity. The pandemic forced people to use, discover and modify previously little-known, though already existing, IT solutions. Given the cost reductions available to employers (e.g., limiting office space), it can be expected that employers will increasingly offer their employees the opportunity to work from home, also after the pandemic is over. Thus, a lack of computer skills and the inability to use the Internet will limit the possibility of performing work, and may lead to the loss of employment and subsequent problems for an individual in the process of reintegration into the labour market. There is no doubt that the nature of work will increasingly require the involvement of the senses, i.e., work will have a sensory dimension. The ongoing epidemic threat, the awareness of further epidemic threats that may require production to be temporarily suspended, and efforts to reduce labour costs will intensify the work of research and development departments on further automation and on minimizing human participation in the production process. It will also give rise to new socio-ethical problems and to the need for researchers in the social sciences and humanities to comprehensively analyse the new phenomena related to technological development.

Perspectives for further research on human creativity and its relationship with new technologies
When analysing the potential trends of research on human creativity in the context of the development of new technologies, especially AI, several possible topics for study by interdisciplinary teams arise:
- The threat of dehumanization: the functioning of advanced technologies, especially AI, often causes concern, but the concerns relate to sophisticated futuristic visions rather than to the actual threat posed by uncontrolled technologization and uncritical acceptance of the achievements of the fourth revolution. This brings the risk of losing the capacity for deeper reflection. Thinking in terms of linking facts and drawing conclusions may become a skill reserved for a narrow group of people and machines. This may have consequences for the social structure and the social positions of individual members of society, and may even lead to granting social status to machines.
- Anomy in the sphere of social and interpersonal relations: a topic similar to the previous one, but with greater emphasis on interpersonal relations. Contact with a living person may soon prove to be highly stressful, and the ability to handle it one of the most sought-after skills: the behaviour of machines will be predictable, while that of humans will remain unpredictable. Perhaps new types of bonds between humans, intelligent machines and social groups will emerge, with psychological and social consequences for the individual.
- Surveillance, or the spectre of Big Brother: the ways of using information about populations and specific individuals should be considered. Political regimes can exploit such information simply by installing the appropriate programs. Doubts arise about the purposes for which the data will be used, about the potential for manipulating social groups and individuals, and even about the possibility of seizing governments and power over an entire state. On the other hand, AI methods will allow citizens to be monitored in extreme situations, as was the case during the COVID-19 pandemic. This may not only directly limit the spread of such diseases, but, by tracking the habits and lifestyle of specific social groups, may also help in the search for an effective remedy against such threats.
- Education: despite the common opinion that young people deal much better with new technologies, the education system itself raises doubts. It should sensitize children and young people to the dangers of unverified contacts in the virtual world, draw attention to Internet addiction and, what is especially important, to the possibility of manipulating behaviour through specific commands from the network. Education should also emphasize practical skills for living outside the virtual world, for example coping with a power outage.
- The problem of digitally excluded people: humanity does not develop at the same pace everywhere, so a global approach to this issue raises several questions. The differences between rich and poor countries may matter less than their level of "networking" and technologization. Several threats posed by AI will simply not affect less developed countries or regions.
- The labour market and digital social responsibility of companies: changes in the labour market caused by the introduction of new technologies will have a tremendous impact on the disappearance of certain professions and the emergence of new ones. Contrary to what might be expected, this will depend not on the level of education required, but on how easily a specific job can be technologized. An example is a radiologist, whose work may largely rely on the automatic reading of data by artificial intelligence (AI), leaving only the necessary verification of the obtained results.
- AI in various areas of law: the instability, or even internal contradiction, of legal systems is a problem; nevertheless, AI makes it possible to search databases for solutions to simple legal problems, which may be applied in less complicated advice. The treatment of complex AI systems, such as unmanned vehicles or highly automated factories where most of the work is done by machines, seems more confusing. Doubts may be raised, for example, by questions of liability in critical situations, as well as by tax law or social security problems.
- The futuristic vision of a war between robots and humans: a single error in an algorithm could be enough, and humanity would have no chance in such a fight. This topic could be pursued in explicit collaboration with computer scientists, who could determine how often errors occur in creating AI algorithms and what their practical consequences are. At the same time, this subject area offers many opportunities for cultural scientists, literary scholars, philosophers and representatives of the arts.
The opposite of the topic formulated in this way could be a possible symbiosis with the world of AI, in variants with more or less dramatic effects, which Fukuyama (2008) wrote about when he outlined his vision of the end of man. The topics listed above constitute extremely broad research areas. Once again, it should be emphasised that even though they have their sources in sociological intuitions, they also involve other sciences. The authors' attempt to indicate potential research trends is only an outline and does not consider a number of questions, nor the necessary research and methodological approaches. The issue of human creativity in the ethical and social aspects of the human relationship with intelligent machines remains a cognitive gap that is only gradually being filled, and a challenge for interdisciplinary researchers.

Conclusions
The dynamic technological development gives mankind a chance to introduce many improvements to everyday life. The level of technological complexity has social significance, as it translates into new types of interpersonal relations, new social phenomena and threats, and new directions for further change. Some of these are still hard to predict. This raises many ethical doubts and questions that have never been asked before. On the one hand, automation and robotization are associated with an increase in production; at the same time, contrary to popular belief, they do not significantly increase unemployment rates. Instead, the professions in demand on the labour market, as well as the nature of the competences being developed, are changing.
The deliberations conducted here aimed to outline the discussions taking place around socio-ethical aspects, as well as real phenomena with a practical dimension. Their mutual relationships and correlations imply further new topics that can be addressed in social research in various areas. An interdisciplinary approach synthesizing diverse areas of knowledge is, however, recommended.