By 2025, an estimated 75 billion interconnected smart devices will be in use in our homes and offices, many of them making decisions on their own without consulting us or the cloud.
If we want these well-connected devices to make decisions for us, we must ensure that they behave ethically and that the AI and machine learning operations they run are secure.
A code of conduct alone is not enough to make these devices safe to use. The industries involved need to ensure that their systems are architected securely and make ethical decisions, and that physical intervention is possible if a system fails to obey the ethical code of conduct.
As the Internet of Things (IoT) grows and artificial intelligence becomes a key component of computing, AI ethics is becoming a key issue to address. Statistics show that over 750 million AI chips were sold in 2020. Their processing power is increasing, and they are now part of smartphones, security cameras, thermostats, and many other smart devices. These systems are getting smarter through machine learning, and their dependence on the internet for decision-making is shrinking.
Building reliable and safe AI/ML systems depends on designing and developing them in comprehensive collaboration with humans. Privacy and security must be built in at the very beginning of system development; they cannot be bolted on at a later stage.
These systems require the highest level of security at every stage of development, at both the software and hardware levels, and must be able to process input data securely. Advanced cryptography solutions are increasingly being used for this purpose.
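As a minimal illustration of the kind of cryptographic safeguard such devices can apply, the sketch below authenticates a sensor reading with an HMAC so that tampered data is rejected. The device name, key, and payload format here are hypothetical, invented for the example; in a real deployment the key would be provisioned in secure hardware, never hard-coded.

```python
import hmac
import hashlib

# Hypothetical shared secret -- in practice provisioned in a secure
# element or TPM on the device, not embedded in source code.
DEVICE_KEY = b"example-device-key"

def sign_reading(payload: bytes, key: bytes = DEVICE_KEY) -> str:
    """Return an HMAC-SHA256 tag authenticating a sensor payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_reading(payload: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Constant-time check that the payload came from a device holding the key."""
    return hmac.compare_digest(sign_reading(payload, key), tag)

# A genuine reading verifies; a tampered one does not.
reading = b'{"sensor": "thermostat", "temp_c": 21.5}'
tag = sign_reading(reading)
assert verify_reading(reading, tag)
assert not verify_reading(b'{"sensor": "thermostat", "temp_c": 99.0}', tag)
```

The constant-time comparison (`hmac.compare_digest`) matters because naive string comparison can leak timing information an attacker could exploit.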
Hardware security will play an important role in preventing attacks that use AI/ML to exploit sensitive data in otherwise secure systems. Devices holding sensitive data must be equipped with security measures to counter such attacks.
The accountability of these systems is currently inconsistent. The AI ecosystem is built by many different creators, so holding them accountable will not be possible until they come together on one platform and agree on a comprehensive code of conduct for AI/ML systems.
A single tiny vulnerability can bring down the whole AI ecosystem.