Byron Gatt

Artificial Intelligence - Some Thoughts

Updated: Jun 1

The world has changed a great deal in the last decade. A bleakness seems to settle over it whenever anyone in public life speaks. Those aware are angry, but not in great numbers. Those unaware are angry but don't know why. The idea of democracy seems to be in trouble.


With the horrible nature of our current human leaders, there is a strange warmth in the coldness of statistical modelling. Don't be turned off; it gets more interesting, I promise. The idea that we can look at cold numbers to tell a truth is comforting in a world of spin.

There is a problem though: interpretation still needs to be processed and decided on by humans. The solutions are often ideologically based and focussed on garnering votes or likes. The humans who touch the data will massage it to fit their narrative.


The problem is clear: humans. This is the issue in all good decision-making; humans have a bias towards self-interest. How then do we overcome this problem?


SCIENCE!


I am a technology nerd and love the advancement it brings. Since Autopilot was introduced, there has been a 40% reduction in accidents among Tesla cars. There is a chance this changes with Elon's erratic nature and things get worse, but for now it's better. This proves that data-driven decisions are a better commander of vehicles than humans.


Why not allow data, and by extension AI, to make decisions on how to govern, making policy with the cold warmth of statistical modelling? Impartial minds, not bound by human bias. If it can save people's lives in cars, why not on Capitol Hill?


We could call it Artificially Intelligent Government (AIG).


Here are 3 thoughts and challenges about this:


1. For Now, a Glimpse of the Future.


Technology has advanced beyond what many people can comprehend. AI is involved in many parts of society, but always at the behest of a human, for now. We are always watching and improving it because the technology is not ready for isolated operation, as far as I'm aware. But I don't think we are far off.


There are many great minds working on this problem, but for now AIG is a future idea. In saying this, we should start thinking now: how will it work? Will we allow it to be an advisor to rulers? Will we have a government responding to its outputs, or a team of data scientists tending to it?


For democracy it asks the question: will we be involved? Should we be? Perhaps we should vote on its proposed outcomes. Could we veto a decision? In that case, how long before we put so much spin and regulation on it that AIG is only an idea once more?


We are not in a position to implement AIG yet, but when we are, how will we work with it? How will the technophobes react to it?


There are many questions, and I think even if AIG never eventuates, they need to be answered. AI will continue to become more involved in society and decision-making. Unless governments put restrictions on its use and influence, the rich will be the beneficiaries. They are the ones building the base structure of our future AI models.


AIG and AI hold incredible promise but, like so many advancements, are easily hijacked. Automation was meant to allow us to be more free, but it has taken away jobs and lined the pockets of the wealthy.


The question really for AIG, and technology as a whole, is: do we want true equality or the status quo?


2. Writing the Rules Without Bias.


This, I think, is the greatest challenge to getting it right. The bias which presents itself, or hides, will determine whether AIG is a success or a failure. There are two places where I think there could be bias leakage: from the scientists and from those commissioning it.


Scientists are viewed as impartial observers of the universe. For the most part this is true, but it is not absolute. Scientists have a bias as to what they believe is most valuable and what the right approach is. It would not be a shock to think that the data scientists writing the code could bias it towards scientific achievement, as an example. This could happen consciously or without their awareness.


The code would be written to direct humanity in the best possible direction. The direction is where bias would come in. If advancement is key, then we would see more decisions favouring science. If human happiness is key, then we may see a shift away from that. Whatever direction the AIG is pointed in, it will be unwavering in its pursuit.


The second most likely source of bias is those commissioning it. Broadly that would be the public, but more specifically it would be the politicians. They would instruct the data scientists on the democratically agreed direction. In truth, it would be the direction they believed the public wanted.


You see, there are many issues that have majority support among the public. Even with this support, politicians still choose not to enact them into law, partly because of belief and partly because of outside forces (money).


It's conceivable that if an AIG were implemented today, it would head in a different direction from what the democracy wanted. The bias of the chequebook could pull it somewhere else entirely.


There are ways around this. One possibility is the AIG checking with the public for support and factoring this in; again, how much weight this is given would drive the final decision. Not to mention the public has believed some pretty disastrous things in the past.
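
To make that weighting idea concrete, here is a minimal, purely hypothetical sketch in Python. Everything in it (the function name, the weights, the numbers) is an assumption of mine for illustration, not a description of any real system:

# Hypothetical sketch: blending an AIG model's own score with public support.
# All names, weights and numbers here are illustrative assumptions.

def blended_score(model_score: float, public_support: float,
                  support_weight: float = 0.3) -> float:
    """Combine the model's score (0 to 1) with public support (0 to 1).

    support_weight controls how much democracy we 'can handle':
    0.0 ignores the public entirely, 1.0 defers to it completely.
    """
    return (1 - support_weight) * model_score + support_weight * public_support


# Example: the model rates a policy highly, the public is lukewarm.
score = blended_score(model_score=0.9, public_support=0.4, support_weight=0.3)
print(f"Blended score: {score:.2f}")  # 0.75 with these made-up numbers

The whole argument hides inside support_weight: set it near zero and the AIG ignores us; set it near one and it becomes a very expensive opinion poll.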


How much democracy do we want or can we handle?


3. Would it Understand Human Suffering?


Governments all over the world, since ancient times, have not taken human suffering into account. AIG should follow a "better than human" test: it should not make decisions that cause more suffering than its human counterparts would.


Any decision past this point is an improvement over our current position. The further removed from that line, the fewer people suffer, and that would be a good thing.


There are many people who would divorce the two. Any suffering caused by AIG would be considered evil because of its source. Human suffering is par for the course, but something else causing it is unacceptable. This in itself shows an irrational bias towards the nature of humans. We would rather have more suffering at the hands of humans than less suffering at the hands of an algorithm.


The Tesla Autopilot success is an example. Every crash brings a news report asking: IS IT SAFE? Many people say robots shouldn't control cars. Yet if you look at the numbers, the robots are doing much better. Perhaps we feel an ownership of human disadvantage.


There also lies a problem in how we define suffering. In the developed world the bar for suffering is much lower than in the developing world. Would we instruct AIG to keep this bias in its decision-making? If not, would the developed world not resent the redistribution of wealth from them to the truly suffering?


As can be seen in our current political discourse, people do not like to share. The ideas of freedom are usually a ruse for the wealthy to make more money. You have freedom of choice in principle, but not so much in reality when you lack the bank balance. AIG could change this because it is not swayed by money. Frankly, as long as it gets electricity, it has all it needs.


A purely objective look at society would help us break away from the misinformation politics we think is unique to our age but is really a human element. AIG would solve many problems of bias, but it would be built by biased people.


Ideally the AIG would be self-learning, but like a child, its building blocks of morality would come from its parents. The human race would need an incredibly brilliant mind to weave around this problem. I am not sure that this will happen, but I hope for it.



 

Democracy in principle is a beautiful and precious thing. It requires participation: learned people making educated decisions. In practice we have the uneducated, not just in a scholastic sense but in a general sense, making decisions based on emotion and myth. AIG would do away with the one thing that harms democracy and every other political system: people.


AI is the wave of the future, and it is my hope that it becomes more than just my personal assistant. There are horror stories of the future and AI's propensity for evil. We see this because we look at human intelligence, and it is a natural leap. My hope is that AI is a different intelligence, a better, less human intelligence.


Peace in the world seems likely only if we are not the ones responsible for its decisions. AIG would allow us to focus on more human pursuits while it focusses on the intelligence of governing.


Byron





