Is product design only about how humans interact with it?
Color is a beautiful thing that evokes different emotions in humans. We see things and differentiate similar objects by their color, and we experience colors as triggers for different emotions. Colors are produced in the visual system of the human brain; in the physical world, colors do not exist. We create colors with our brains, which means color is subjective in nature, not objective.
In design, color acts as a key function that grabs the user's attention. Color is the easiest aspect for users to remember when encountering something new. The colors of a design always connect with the branding of the product, and product designers use color to communicate what the product is about. Much of the time, users' purchase decisions depend largely on color. There are a few facts that are quite important when it comes to color psychology.
Fact 01: Color preferences differ between genders
According to findings by Joe Hallock, there is a significant difference between genders when it comes to color preference. The study examined the most and least favored colors: blue was significantly favored by both men and women, while orange was the most disliked color for both. The study also found that men preferred bold colors while women preferred soft colors.
These findings help explain why, in product design, designers tend to use blue as much as they can in their applications and use orange sparingly. But it is always good to choose colors that not only support likability but also improve the quality of the experience and the behavior of users.
Fact 02: The use of color depends on the product or service
When it comes to application design, most people look at color before making a purchase. For example, G-Shock wristwatches are famous for their hard use and durability. When users visit the G-Shock website, they feel that exact sense of trust in what the wristwatch stands for.
Color also brings out the personality of the user of an application. Here you can see that G-Shock uses bold colors that easily grab the attention of people who like to wear cool things rather than look extremely professional.
Fact 03: Color makes the product recognizable
Product design is not just about being understandable but also discoverable. Our brains like to focus on brands that are immediately recognizable. To make a product look engaging and recognizable, you have to use colors that align with your business ideas, personality and emotion, and that differ from your competitors'. Many studies have shown that color is a key factor when dealing with direct competitors. The use of color is especially common in the food and restaurant industry, where designers base the appearance of the product heavily on unique colors. McDonald's, KFC, Starbucks and other famous dine-in chains with branches overseas focus heavily on their unique colors and designs.
The important thing is to understand and focus on the customer's reaction to the colors rather than on the colors themselves. Your colors should achieve the goal of what you are trying to give to the customers.
How do colors impact design?
There are many articles you can find about how colors impact design. In my study, I found the following examples of how color psychology has shaped design.
Blue is one of the most commonly used colors in product design. Blue is considered to evoke emotions such as trust, safety and relaxation.
Blue has different shades that create different sets of emotions. Light blue creates emotions such as calm and makes the user feel refreshed. Blue is also associated with happiness: a clear blue sky gives a feeling of happiness and friendliness, and through that friendliness, trust is created in the user.
Pink is a color related to candy and sugary items. It is often called a "girl's color", but pink is not as feminine as you might think. It is a color of playfulness and joy.
Black is one of the most desired colors in the spectrum. Black represents power and formality and is considered the strongest color of the spectrum. Black fonts have been with us from the black-and-white age to the electronic age because of their ability to convey an emotion of power better than other colors.
People often say "everything is cool with black", since the color carries power.
Red gives us a sense of importance and also notifies us of danger. In design, red is often used in places where the user should pay special attention. For example, traffic lights show red as an indication to stop crossing or to stop a vehicle from moving forward. At the same time, red is taken as a symbol of love and passion. But most of the time, red is used where the user's immediate attention is needed.
Green, for obvious reasons, is a color humans connect to the environment, trees and plants. Organizations that sell organic food and beverages often use green in their applications. Since this color is so natural to our eyes, it grabs attention when used properly.
We tend to assume color has everything to do with branding and nothing to do with the application user's emotions. But we can clearly see that colors can be used to trigger different emotions, and that this can help gain the upper hand over direct competitors as well. Knowledge of color psychology also helps us see through common misconceptions about color, such as:
There are ugly colors and there are beautiful colors
Colors Are Naturally Organised Along a Color Circle
Humans See Color
Color Preferences are Strictly Personal
It also helps designers understand that there is no universal "best" color to use in a design. We should always focus on who we are designing for, and get their ideas and feedback at an early stage of the design process, to create a design that better supports a good user experience.
The Internet is growing exponentially, and so is the Web platform we create. Often, though, we fail to reflect on the bigger picture of connectivity and the contexts our audience might find themselves in. Even a short glance at the state of the World Wide Web shows that we haven't been building with empathy or awareness of situational variability, let alone with performance in mind.
So, what is the state of the Web today?
Only 46% of the 7.4 billion people on this planet have access to the Internet. The average network speed caps at an unimpressive 7 Mb/s. More importantly, 93% of Internet users go online through mobile devices, which makes it inexcusable not to cater to handhelds. Data is often more expensive than we'd assume: it can take anywhere from one hour to 13 hours of work to afford a 500 MB data package (Germany versus Brazil; for more intriguing stats on connectivity, head to Ben Schwarz's Beyond the Bubble: The Real World Performance).
As technologists, we often find ourselves in a position of privilege. With up-to-date, high-end laptops and phones and a fast cable Internet connection, it becomes all too easy to forget that this isn't the case for everyone (in fact, it's the case for very few).
If we build the web platform from a standpoint of privilege and a lack of empathy, the result is exclusionary, subpar experiences.
How can we do better by designing and developing with performance in mind?
Optimising all assets
One of the most powerful, but under-utilised ways to significantly improve performance starts with understanding how the browser analyses and serves assets. It turns out that browsers are pretty great at discovering resources while parsing and determining their priority on the fly. Here’s where the critical request comes in.
A request is critical if it contains assets that are necessary to render the content within the users’ viewport.
With <link rel='preload'> we're able to manually raise an asset's priority to High, ensuring that the desired content is rendered on time. This technique can yield significant improvements in the Time to Interactive metric, making an optimal user experience possible.
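As a minimal sketch (the file paths are placeholders), a critical stylesheet and web font can be preloaded from the document head like so:

```html
<head>
  <!-- Fetch the main stylesheet at high priority, before the parser discovers it -->
  <link rel="preload" href="/css/main.css" as="style">
  <!-- Preloaded fonts must be requested with CORS, even when self-hosted -->
  <link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
  <link rel="stylesheet" href="/css/main.css">
</head>
```

The as attribute tells the browser what it is fetching so it can assign the correct priority and apply the matching content security policy.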
🛠 To track how well you're doing at prioritising requests, use the Lighthouse performance tool and its Critical Request Chains audit, or check Request Priority under the Network tab in Chrome Developer Tools.
📝 General performance checklist
Prioritise critical assets
Use content delivery networks
Images often account for most of a web page's transferred payload, which is why image optimisation can yield the biggest performance improvements. There are many existing strategies and tools to help us remove extra bytes, but the first question to ask is: "Is this image essential to convey the message and effect I'm after?" If it can be eliminated, we save not only bandwidth but also requests.
In some cases, similar results can be achieved with different technologies. CSS has a range of properties for art direction, such as shadows, gradients, animations and shapes, allowing us to swap assets for appropriately styled DOM elements.
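For instance, a gradient background with a soft shadow that might once have been shipped as a sliced image can be expressed in a few lines of CSS (the selector name here is illustrative):

```css
/* A gradient and shadow replacing a background image asset: zero requests, a few bytes */
.hero-banner {
  background: linear-gradient(135deg, #1e3c72, #2a5298);
  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.25);
  border-radius: 8px;
}
```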
Choosing the right format
If it’s not possible to remove an asset, it’s important to determine what format will be appropriate. The initial choice falls between vector and raster graphics:
Vector: resolution independent, usually significantly smaller in size. Perfect for logos, iconography and simple assets composed of basic shapes (lines, polygons, circles and points).
Raster: offers much more detailed results. Ideal for photographs.
After making this decision, there is a fair number of formats to choose from: JPEG, GIF, PNG–8, PNG–24, or newer formats such as WebP and JPEG-XR. With such an abundance of options, how do we ensure we're picking the right one? Here's a basic way of finding the best format:
JPEG: imagery with many colours (e.g. photos)
PNG–8: imagery with a few colours
PNG–24: imagery with partial transparency
GIF: animated imagery
Photoshop can optimise each of these on export through various settings, such as decreasing quality, reducing noise or limiting the number of colours. Ensure that designers are aware of performance practices and can prepare the right type of asset with the right optimisation presets. If you'd like to learn more about handling images, head to Lara Hogan's invaluable Designing for Performance.
Experimenting with new formats
There are a few newer players in the spectrum of image formats, namely WebP, JPEG 2000 and JPEG-XR, each championed by a browser vendor: WebP by Google, JPEG 2000 by Apple and JPEG-XR by Microsoft.
WebP is easily the most popular contender, supporting both lossless and lossy compression, which makes it incredibly versatile. Lossless WebP is 26% smaller than PNG and 25–34% smaller than JPG. With 74% browser support, it can safely be used with a fallback, introducing up to one-third savings in transferred bytes. JPGs and PNGs can be converted to WebP in Photoshop and other image processing apps, as well as through command-line interfaces (brew install webp).
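One common way to ship WebP with a fallback (the paths here are placeholders) is the picture element with a type hint; browsers that don't understand image/webp simply fall through to the JPEG:

```html
<picture>
  <!-- Served only by browsers that support image/webp -->
  <source srcset="/img/team.webp" type="image/webp">
  <!-- Fallback for everyone else -->
  <img src="/img/team.jpg" alt="Our team at the office">
</picture>
```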
Even using incredibly efficient image formats doesn't justify skipping post-processing optimisation. This step is crucial.
If you've chosen SVGs, which are usually relatively small, they too should be compressed. SVGO is a command-line tool that swiftly optimises SVGs by stripping unnecessary metadata. Alternatively, use Jake Archibald's SVGOMG if you prefer a web interface or are limited by your operating system. Because SVG is an XML-based format, it can also be GZIP-compressed on the server side.
ImageOptim is an excellent choice for most other image types. Comprising pngcrush, pngquant, MozJPEG, Google Zopfli and more, it bundles a number of great tools in one comprehensive, open-source package. Available as a macOS app, command-line interface and Sketch plugin, ImageOptim is easily incorporated into an existing workflow. For those on Linux or Windows, most of the CLIs ImageOptim relies on are also available on your platform.
If you're inclined to try emerging encoders: earlier this year, Google released Guetzli, an open-source algorithm stemming from their learnings with WebP and Zopfli. Guetzli is claimed to produce JPEGs up to 35% smaller than any other available compression method. The only downside: slow processing times (a minute of CPU time per megapixel).
When choosing tools, make sure they produce the desired results and fit your team's workflow. Ideally, automate the optimisation process so that no imagery slips through the cracks unoptimised.
Responsible and responsive imagery
A decade ago we might have gotten away with serving one resolution for all, but the landscape of the ever-changing, responsive web is very different today. That's why we have to take extra care when implementing the visual resources we've so carefully optimised, ensuring they cater to the variety of viewports and devices. Fortunately, thanks to the Responsive Images Community Group, we're perfectly equipped to do so with the picture element and the srcset attribute (both with 85%+ support).
The srcset attribute
Srcset works best in the resolution-switching scenario: when we want to display imagery based on the user's screen density and size. Based on a set of predefined rules in the srcset and sizes attributes, the browser picks the best image to serve according to the viewport. This technique can bring significant bandwidth and request savings, especially for mobile audiences.
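A resolution-switching setup might look like this (file names and breakpoints are illustrative); the browser picks the smallest candidate that satisfies the sizes rule for the current viewport and pixel density:

```html
<img src="/img/hero-800.jpg"
     srcset="/img/hero-400.jpg 400w,
             /img/hero-800.jpg 800w,
             /img/hero-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Hero illustration">
```

Here sizes declares that the image occupies the full viewport width on small screens and half of it otherwise, letting the browser do the arithmetic.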
The picture element
The picture element and the media attribute are designed to make art direction easy. By providing different sources for varying conditions (tested via media queries), we're always able to keep the most important elements of the imagery in the spotlight, no matter the resolution.
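A sketch of art direction with the media attribute (sources are placeholders): a tight crop keeps the subject readable on phones, while larger screens get the full wide shot:

```html
<picture>
  <!-- Tight crop that keeps the subject legible on narrow screens -->
  <source media="(max-width: 600px)" srcset="/img/story-crop.jpg">
  <!-- Full wide shot for everything larger -->
  <source media="(min-width: 601px)" srcset="/img/story-wide.jpg">
  <img src="/img/story-wide.jpg" alt="Protesters marching down the main street">
</picture>
```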
The last step of our journey to performant visuals is delivery. All assets can benefit from a content delivery network, but there are tools targeting imagery specifically, such as Cloudinary or imgix. The benefit of using these services goes beyond reducing traffic on your servers and significantly decreasing response latency.
CDNs can take a lot of complexity out of serving responsive, optimised assets on image-heavy sites. The offerings differ (and so does the pricing), but most handle resizing, cropping and determining which format is best to serve based on the user's device and browser. Beyond that, they compress, detect pixel density, watermark, recognise faces and allow post-processing. With these powerful features and the ability to append parameters to URLs, serving user-centric imagery becomes a breeze.
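As an illustration using Cloudinary's public demo account, resizing and automatic format and quality selection are just URL parameters:

```
https://res.cloudinary.com/demo/image/upload/sample.jpg
https://res.cloudinary.com/demo/image/upload/w_400,f_auto,q_auto/sample.jpg
```

The second URL resizes the image to 400 pixels wide (w_400) and lets the CDN pick the best format (f_auto) and quality (q_auto) for the requesting browser; other providers expose similar knobs under different parameter names.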
📝 Image performance checklist
Choose the right format
Use vector wherever possible
Reduce the quality if change is unnoticeable
Experiment with new formats
Optimise with tools and algorithms
Learn about srcset and picture
Use an image CDN
Optimising web fonts
The ability to use custom fonts is an incredibly powerful design tool. But with power comes a lot of responsibility: with a whopping 68% of websites leveraging web fonts, this type of asset is one of the biggest performance offenders (easily averaging 100 KB, depending on the number of variants and typefaces).
Even when weight isn't the biggest issue, the Flash of Invisible Text (FOIT) is. FOIT occurs when web fonts are still loading or have failed to fetch, resulting in an empty page and thus inaccessible content. It's worth carefully examining whether we need web fonts in the first place. If we do, there are a few strategies to help mitigate the negative effects on performance.
Choosing the right format
There are four web font formats: EOT, TTF, WOFF and the more recent WOFF2. TTF and WOFF are the most widely adopted, with over 90% browser support. Depending on the browsers you're targeting, it's most likely safe to serve WOFF2 and fall back to WOFF for older browsers. The advantage of WOFF2 is a set of custom preprocessing and compression algorithms (such as Brotli), resulting in roughly 30% smaller files and improved parsing.
When defining the sources of web fonts in @font-face, use the format() hint to specify which format should be used.
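A minimal @font-face declaration serving WOFF2 with a WOFF fallback (the family name and paths are placeholders); the browser downloads the first source whose format() it supports:

```css
@font-face {
  font-family: "Body Text";
  src: url("/fonts/body.woff2") format("woff2"),
       url("/fonts/body.woff") format("woff");
  font-weight: 400;
  font-style: normal;
}
```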
Audit font selection
No matter whether self-hosting or not, the number of typefaces, font weights and styles will significantly affect your performance budgets.
Ideally, we can get away with one typeface featuring normal and bold stylistic variants. If you're not sure how to make font selection choices, refer to Lara Hogan's Weighing Aesthetics and Performance.
Use Unicode-range subsetting
Unicode-range subsetting allows splitting a large font into smaller sets. It's a relatively advanced strategy, but it can bring significant savings, especially when targeting Asian languages (did you know Chinese fonts average 20,000 glyphs?). The first step is to limit the font to the necessary language set, such as Latin, Greek or Cyrillic. If a web font is needed only for a logotype, it's worth using the unicode-range descriptor to pick out the specific characters.
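A sketch of language subsetting with the unicode-range descriptor (paths are placeholders); the subset file is downloaded only if the page actually uses characters from the declared range:

```css
@font-face {
  font-family: "Body Text";
  src: url("/fonts/body-latin.woff2") format("woff2");
  /* Basic Latin plus the Latin-1 Supplement */
  unicode-range: U+0000-00FF;
}
```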
The Filament Group released an open-source command-line utility, glyphhanger, that generates a list of the necessary glyphs based on files or URLs. Alternatively, the web-based Font Squirrel Webfont Generator offers advanced subsetting and optimisation options. If you use Google Fonts or Typekit, choosing a language subset is built into the typeface picker interface, making basic subsetting easier.
Establish a font loading strategy
Fonts are render-blocking: the browser has to build both the DOM and CSSOM first, and web fonts won't be downloaded until they're used in a CSS selector that matches an existing node. This behaviour significantly delays text rendering, often causing the Flash of Invisible Text (FOIT) mentioned before. FOIT is even more pronounced on slower networks and mobile devices.
Implementing a font loading strategy prevents users from being locked out of your content. Often, opting for a Flash of Unstyled Text (FOUT) is the easiest and most effective solution.
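One straightforward way to opt into FOUT in modern browsers is the font-display descriptor; with swap, text renders immediately in a fallback font and is swapped once the web font arrives (the family name and path are placeholders):

```css
@font-face {
  font-family: "Body Text";
  src: url("/fonts/body.woff2") format("woff2");
  /* Render fallback text immediately; swap in the web font when it loads */
  font-display: swap;
}
```

For older browsers, script-based loaders that toggle a class once fonts are ready achieve a similar effect.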
Optimising JavaScript

Analysing parse and compile times is crucial to understanding when apps become interactive. These timings vary significantly with the hardware capabilities of the user's device: parse and compile can easily be 2–5x slower on lower-end mobiles. Addy Osmani's research shows that on an average phone an app can take 16 seconds to reach an interactive state, compared to 8 seconds on desktop. It's crucial to analyse these metrics, and fortunately we can do so through Chrome DevTools.
The way modern package managers work can easily obscure the number and size of dependencies. webpack-bundle-analyzer and Bundle Buddy are great visual tools for identifying duplicated code, the biggest performance offenders, and outdated or unnecessary dependencies.
We can make the cost of imported packages even more visible with the Import Cost extension for VS Code and Atom.
Implement code splitting
Webpack, one of the most popular module bundlers, comes with code-splitting support. The most straightforward code splitting is per page (home.js for the landing page, contact.js for the contact page, etc.), but Webpack also offers a few advanced strategies, such as dynamic imports and lazy loading, that are worth looking into.
Consider framework alternatives
📝 JavaScript performance checklist
Get rid of unnecessary dependencies
Implement code splitting
Consider framework alternatives
Tracking performance and the road forward
We've covered several strategies that, in most cases, will yield positive changes to the user experience of the products we're building. Performance can be a tricky beast, though, and it's necessary to track the long-term results of our tweaks.
User-centric performance metrics
Great performance metrics aim to portray the user experience as closely as possible. Long-established metrics such as onLoad, onContentLoaded or SpeedIndex tell us very little about how soon a site can be interacted with. Focusing only on the delivery of assets makes it hard to quantify perceived performance. Fortunately, there are a few timings that paint quite a comprehensive picture of how soon content becomes both visible and interactive.
Those metrics are First Paint, First Meaningful Paint, Visually Complete and Time to Interactive.
First Paint: the browser has changed from a white screen to the first visual change.
First Meaningful Paint: text, images and major items are viewable.
Visually Complete: all content in the viewport is visible.
Time to Interactive: the page is visually rendered and reliably responds to user input.
These timings directly correspond to what users see, and therefore make excellent candidates for tracking. If possible, record all of them; otherwise, pick one or two to better understand perceived performance. It's worth keeping an eye on other metrics as well, especially the number of bytes (optimised and unpacked) we're sending.
Setting performance budgets
All of these figures might quickly become confusing and cumbersome to understand. Without actionable goals and targets, it’s easy to lose track of what we’re trying to achieve. A good few years ago Tim Kadlec wrote about the concept of performance budgets.
Unfortunately, there’s no magical formula for setting them. Often performance budgets boil down to competitive analysis and product goals, which are unique to each business.
When setting budgets, it's important to aim for a noticeable difference, which usually means at least a 20% improvement. Experiment and iterate on your budgets, using Lara Hogan's Approach New Designs with a Performance Budget as a reference.
Monitoring performance shouldn’t be manual. There are quite a few powerful tools offering comprehensive reporting.
Google Lighthouse is an open-source project auditing performance, accessibility, progressive web apps and more. Lighthouse can be used from the command line or, as of recently, directly in Chrome Developer Tools.
For continuous tracking, opt for Calibre, which offers performance budgets, device emulation, distributed monitoring and many other features that would be impossible to get without carefully building your own performance suite.
Wherever you track performance, make sure the data is transparent and accessible to the entire team or, in smaller organisations, the whole business.
Performance is a shared responsibility that extends beyond developer teams: we're all accountable for the user experience we create, no matter our role or title.
It's incredibly important to advocate for speed and to establish collaboration processes that catch possible bottlenecks as early as the product decision or design phases.
Building performance awareness and empathy
Caring about performance isn't only a business goal (though if you need to sell it through statistics, do so with PWA Stats). It's about fundamental empathy and putting the best interests of users first.