Senior brand designer who builds things. I design identity systems, campaign frameworks, and production pipelines — and increasingly, the tools I use to deliver them. Six years in-house at Uncommon Goods and Jonathan Adler. Now working at the intersection of brand craft and AI.











Self-initiated work at the edge of what brand designers are supposed to know how to build.
A browser tab that looks like a mirror until you summon a plasma orb between your hands.
An AI-powered operating system for brand engagements — custom intake, live brand mixer, auto-generated proposals, and smart asset delivery. Built because the standard process was broken.
Scalable creative systems and campaign execution for a design-driven e-commerce platform — 2M+ customers, one cohesive brand voice across every touchpoint.




At Uncommon Goods — a Brooklyn-based e-commerce company serving 2M+ customers through a curated selection of unique, sustainable products — I work across the full spectrum of marketing and product needs. My role spans strategic direction and hands-on production: from defining campaign concepts and pitch frameworks to delivering final assets across email, social, web, and packaging. The volume and cadence of work require systematized thinking, not bespoke solutions.








I structure and produce marketing campaigns that balance performance goals against the playful, editorial voice that defines Uncommon Goods — working in close collaboration with design and creative directors.
My approach establishes a typography-first visual hierarchy optimized for product benefits, personalization, and gifting sentiment. Each campaign cycle informs the next—we test and refine creative iteratively with marketing, using performance data to sharpen decisions while preserving creative intuition.

Story of You is a customizable keepsake book that guides customers through creating a personal memoir. As lead designer, I partnered with merchandising, engineering, and product management to define the visual and editorial framework for a 100+ page book — navigating print production constraints, cross-team dependencies, and multiple proofing cycles. The product generated $117k in its first months post-launch, with a follow-up already in development.







I concepted, pitched, and designed a modular gift packaging system in collaboration with senior leadership — three sizes, a custom pocket for gift certificates, and sustainable materials aligned with brand commitments. The system prioritized scalability and material efficiency over bespoke production.


First creative hire for a new in-house department — I built design infrastructure, campaign systems, and brand standards from scratch for an iconic luxury home décor brand.



I was the first hire for the in-house Brand Creative team at Jonathan Adler — a new department charged with unifying and elevating brand visuals and voice across all customer touchpoints.
Jonathan Adler is a luxury home décor brand built on maximalist design, bold patterns, and irreverent humor — operating across e-commerce, retail, wholesale, and editorial channels, each with distinct production requirements. Joining as the senior designer of a department that didn't yet exist, I built systems and processes from the ground up while maintaining the high bar of craft the brand demands.
Alongside a weekly marketing production schedule, I worked with the department director to establish repeatable design workflows, documentation, and quality standards. Over three years, each seasonal cycle refined the system — what started as ad-hoc production evolved into a scalable framework that grew the team from one designer to several.
As the team grew, I moved from individual contributor to design lead — mentoring junior designers while continuing to own the highest-visibility projects and the systems beneath them.
I established a repeatable campaign framework for promotional marketing — one of the highest-volume, highest-visibility parts of my role. Each campaign's creative ships across email, SMS, web, social media, and retail print for all JA stores—requiring a system flexible enough to adapt across channels without losing cohesion.
My approach defined mini brand identities for each campaign—drawing from product catalog imagery to build cohesive visual systems. Typographic consistency became the structural anchor, creating space for bold color, movement, and decorative flourishes within the brand's maximalist language.


Selection of seasonal promotional campaigns spanning 2022–2025
I designed in-store signage and retail graphics for all Jonathan Adler locations — standardizing a system from window displays to interior wayfinding across multiple store formats.


Each season, I collaborated with Jonathan himself, leadership, and the Brand Creative Director to produce a fully custom product catalog. These 70+ page books reach 500,000+ customers quarterly — plus a holiday drop — with each edition building on the editorial structure and production learnings of the last.


I collaborated with senior leadership to consolidate and clarify the B2B program into a cohesive print catalog and dedicated landing page. The challenge was translating a consumer-facing luxury brand into B2B-appropriate messaging without diluting the visual identity — resulting in increased partnerships and new client acquisition.


End-to-end brand identity and application system for an AI-native communications agency — from logo and visual language through collateral and usage guidelines built to scale from launch.
Wild Signal is an AI-native communications agency founded by experienced industry leaders, operating at the intersection of data-driven insight and creative execution. The company needed a complete visual identity that could establish credibility with enterprise clients from day one.
I developed the full identity system: logo, iconography, typography, color palette, and comprehensive brand guidelines. Working directly with both founders, I structured the visual language to be flexible enough for pitch decks today and scalable enough for a growing team and client roster over time.
Beyond the core identity, I produced launch-ready marketing assets—LinkedIn presence, pitch materials, and client-facing collateral for early engagements already on their roster.
The constraint was clear: the mark needed to reference the initials WS without feeling literal or stiff. The brief called for trust and innovation with a subtle sense of playfulness—a visual shorthand for the agency's ability to simplify complex data and bring clarity to chaos.
These first-round concepts explore different structural interpretations of that idea, testing shape, personality, and flexibility to arrive at a mark that could function as a compact icon, an extended pattern, or a standalone symbol.
Pulse/radio wave, Scribble, direct messages, both S and W present in mark
"W" and "S" present in mark, pen-like strokes, can be used as an abstract font / extended background pattern
Arrow directing to specific point, abstract "W," snowflake imagery represents uniqueness of insights
Growing radio/signal wave, paint marks, scribbled "W" and "S," can be made into hand-drawn pattern
Growing radio/signal wave, paint marks, abstract "W"
Organic shape, growth, abstract "w"
Arch signal element, connecting journeys, "S", puzzle pieces coming together to make a clear image






The Wayfinder is a secondary brand element that distills the Wild Signal identity into a compact, versatile mark. Designed to adapt over time, it serves as a navigational icon, badge, and standalone symbol across brand touchpoints—giving the system a flexible secondary element independent of the primary logo.

The completed brand kit includes comprehensive guidelines for logo usage, typography hierarchy, color applications, and iconography. The system was structured so that non-designers on the team could apply it correctly without art direction—prioritizing consistency and self-service over bespoke production.
The landing page translates the brand system into a marketing environment — establishing credibility with enterprise prospects while preserving the visual identity's distinctiveness. Typography, color, and layout reinforce the same principles defined in the guidelines.
Visual language design for an AI-driven children's storytelling platform — building an illustration system that became the generative foundation for the entire product experience.

Lalou is an early-stage storytelling platform that lets families create personalized bedtime stories for their children. The product sits at the intersection of handcrafted illustration and AI-generated content — each story produces custom illustrated scenes and personalized voiceovers through ElevenLabs.
My role was to define the visual language that would underpin every surface of the product. Working from an initial palette and typography direction, I developed a complete illustration system — character proportions, line weight rules, color application logic, and compositional patterns — that gave the brand its recognizable visual identity.
A critical design constraint: the app generates custom illustrated scenes per story using AI. I authored the style parameters and visual rules so that AI-generated output would feel like a natural extension of the handcrafted originals — not a departure from them. This meant designing not just illustrations, but a reproducible visual system.
I explored a wide range of illustration directions to find a balance between expressive, childlike spontaneity and the clean, modular sensibility a digital product requires. Early sketches tested variations in line weight, proportions, and styling to define a repeatable structure flexible enough for story prompts, character moments, and UI surfaces.

With the visual direction approved, I translated the chosen style into a complete illustration set — each asset refined for consistency, clarity, and usability across the product ecosystem. The final suite balances personality with functional simplicity, supporting both in-app moments and broader brand storytelling.





Once the illustration language was established, I brought it into early product mockups to test how the system would operate within the app. The screens pair simple, intuitive UI with playful visual moments—optimizing for clarity for parents and engagement for children.
The AI voice generation layer adds personalization without interface complexity. The design decision was to keep the technology invisible—users experience the result, not the mechanism.
The landing page extends the brand identity into a marketing environment structured around conversion and trust. It introduces the platform's purpose, surfaces the AI-driven storytelling approach, and guides users through core features. Layout, typography, and illustration work together to make a product that involves AI feel approachable rather than technical.

Brand identity and a custom-built live mixer for an AI-native immigration platform — so the client could explore directions in real time instead of waiting on a PDF.
Perebel is an immigration case management platform built by MM Labs. The software runs the operating layer of a small immigration firm, from intake and onboarding through document processing, billing, and case outcome. Two audiences live inside the same product: lawyers managing a caseload, and immigrants going through one of the highest-stakes moments of their lives.
The brief asked for a mark that could hold both registers at once: trustworthy enough for an attorney reviewing a case file, and warm enough for a client reading a status update. I worked through the metaphor outward from a single question: what does this product do for the person using it?
The turnaround was tight (four weeks from kickoff to incubator deadline), so I built a live mixer to speed up the conversation. Instead of exporting eight palette and type combinations into a PDF, the client could open a URL and swap directions in real time: cycle through palettes, pair them with different logo marks, try the full system in light and dark modes, and see it all applied to a marketing hero and a product card.
It replaced the usual deck review with something closer to design-in-public. The embed below is a preserved early version showing the full range of directions we tested before locking the system.
Sketches filled a few pages before anything digital happened. From there, I grouped the strongest directions into four conceptual families, each testing a different metaphor for the work Perebel supports. The final mark came out of Group A: the arched passage, a threshold between one life and the next.
Group A · Passage / Threshold. Arched doorways and gates, the movement from one status to another. The direction that became the final mark.
Group C · Document / Analog. Folded paper, dog-eared pages, the physical weight of a file. A nod to the paperwork that runs through every case.
Group D · Annotation / Signature. Handwritten P forms, reading as a margin note or a signed endorsement. Personal, but harder to scale.
Group E · Destination / Pin. Map pins and clover-leaf arrivals, framing the outcome instead of the journey. Read too literally, cut early.
The wordmark is set in EB Garamond, a serif with enough formality for a legal workspace and enough warmth for a client portal. The same letterforms translate into a lockup and a standalone icon, giving the team three marks they can deploy without making design decisions: wordmark for headers and primary surfaces, lockup for signatures and co-branding, icon for favicons, app tiles, and tight layouts.






Warm, paint-and-ink, built to signal someone understands the weight of the work. Three neutrals anchor the system (cordovan, cream, sand) with six chromatic supporting tones that carry emotion through client-facing surfaces. Every usable pairing was checked for WCAG AA contrast so the same palette can drive both the lawyer register and the client register without additional decisions.
General Sans is the working surface — UI labels, dashboard fields, body copy in the product. It leans a touch more modern than the serif lets on, which is what holds the client-facing register without breaking the system.
EB Garamond reads as legal, sober, unmistakably professional. The kind of serif a traditional attorney audience expects to see on a document they're about to sign.
Both faces had to clear two practical hurdles: open-source for a pre-seed budget, and multilingual coverage for clients reading in Spanish, French, Portuguese, Vietnamese. The chosen pair handles the Latin-extended range without falling back mid-paragraph.
The locked direction (P3), shown across every mockup at once.
End-to-end product campaign — photography, web design, and animation from a formal client brief at the Wix Playground Academy.

The Wix Playground Academy is a selective program covering studio photography, coding, animation, UX, art direction, and marketing. The program operates on real client briefs with professional production standards.
This project responded to a formal brief from Quip, a DTC dental company. The challenge was repositioning an everyday product as a desirable gift—shifting the framing from utility to experience. All photography, animation, and web design are original work.
I styled and photographed the Quip product line with a focus on clean, elevated compositions that reinforce the gifting narrative. The approach established a repeatable structure across shot types—detailed close-ups, full-set arrangements, and lifestyle contexts—to support different marketing surfaces.

Featured on Quip's official Instagram page

The landing page concept positions Quip as an ideal gift, structuring the narrative around packaging elegance and subscription value. Clean typography and generous whitespace let the photography carry the persuasion—a deliberate choice to prioritize image quality over layout complexity.
Website design and information architecture for a digital-first career coaching platform — translating app product into a marketing-ready web presence.
Chea Seed is a digital-first career coaching platform—a career fitness tracker that helps users navigate raises, reviews, job transitions, and professional confidence. The product existed as an app; the website needed to translate that experience into a compelling marketing surface.
Working directly with the CEO and lead developer, I mapped the app's structure, brand personality, and user value into a scalable framework for the site—product information, company backstory, blog, and career resources. The challenge was structuring content for two distinct audiences: new visitors discovering the product and returning users seeking resources.
The wireframing process established information hierarchy and user flow before committing to visual design. Each page structure was defined around a clear decision: what does this audience need first, and what can wait? This prioritized scannability and conversion paths over visual density.
A browser tab that looks like a mirror until you rub your hands together and summon a plasma orb. The physics aren't real, but the feeling of holding one is.
It started as a question about hand tracking. Could a browser do something genuinely physical-feeling without any hardware beyond a laptop camera? I'd been reading about MediaPipe, a machine learning library that maps 21 landmarks across each hand in real time, and I wanted to know what you could do with it if the goal wasn't utility.
The reference point was Dragon Ball Z. That childhood feeling of watching someone cup their hands and gather energy between them, something invisible made tangible through concentration and gesture. Not the show's aesthetic literally, just the sensation of it. Could a browser window make someone feel like they were holding something that wasn't there?
Turns out you can, as long as every design decision protects the illusion.
The whole thing runs in the browser. No app, no install, just a URL and camera access. MediaPipe tracks both hands at the same time, mapping 21 landmark points per hand at roughly 30 frames per second, and the orb's position, size, and energy state are derived entirely from those landmarks: where your palms are, how far apart your hands are, the pinch distance between your fingers, how fast your hands are moving.
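The derivation is simpler than it sounds. A minimal sketch of the idea, assuming normalized palm-center coordinates like the ones you'd average from MediaPipe's landmarks — the function name, threshold, and scaling constants here are illustrative, not the production values:

```javascript
// Sketch: derive orb state from two palm centers in normalized (0..1) coords.
// In the real build these points come from MediaPipe hand landmarks; the
// constants below are hypothetical stand-ins for the tuned values.
function orbStateFromPalms(leftPalm, rightPalm) {
  // Orb floats at the midpoint between the hands.
  const x = (leftPalm.x + rightPalm.x) / 2;
  const y = (leftPalm.y + rightPalm.y) / 2;

  // Distance between palms drives both size and energy.
  const gap = Math.hypot(rightPalm.x - leftPalm.x, rightPalm.y - leftPalm.y);

  // Closer hands concentrate the orb: smaller gap, higher energy.
  const energy = Math.min(1, Math.max(0, 1 - gap / 0.6));

  // The orb's radius scales with the space between the hands.
  const radius = 0.05 + gap * 0.4;

  return { x, y, radius, energy };
}
```

Everything else — pinch distance, hand velocity — feeds the same kind of mapping: raw landmark geometry in, one expressive parameter out.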
The visual system is built in layers on an HTML canvas (outer haze, plasma surface deformed by simplex noise, inner core, corona tendrils, spark particles), each layer responding to the orb's energy state. At rest it breathes slowly. Charged, it pulses and throws sparks. At maximum energy the color ramps from purple through orange to white-hot and the containment warning trips.
The aesthetic direction was "scientific instrument meets occult artifact": clinical telemetry presentation of something obviously magical. The humor is entirely in the contrast, with deadpan readouts like PLASMA TEMP: 4,200K and CONTAINMENT STATUS: NOMINAL for what is clearly a glowing fantasy fireball. The widgets take themselves completely seriously, which is the whole joke.
No hand skeleton. My first instinct was to render MediaPipe's landmark points as a visible overlay, dots and lines showing the tracking in real time. I cut it immediately because the moment you can see the mechanism, the illusion collapses. The orb and particles are the only evidence anything is happening, and the technology has to disappear.
Thermal color, not rainbow. A rainbow ramp reads as tech demo, since it signals "look what this can do" rather than "feel what this is." A thermal ramp from deep purple at rest through red-orange and gold up to white-hot reads as energy, and purple as the cool state felt magical where blue would have felt clinical. Color is doing the emotional work.
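Under the hood a ramp like this is just linear interpolation between a few hand-picked stops. A sketch of the approach — the stop positions and RGB values below approximate the purple-to-white-hot progression described, not the exact values in the piece:

```javascript
// Sketch: thermal color ramp mapping energy (0..1) to an [r, g, b] triple.
// Stops approximate deep purple -> red-orange -> gold -> white-hot;
// the actual build uses different tuned values.
const STOPS = [
  [0.0,  [80, 0, 160]],    // deep purple at rest
  [0.45, [230, 80, 30]],   // red-orange as it charges
  [0.75, [255, 200, 60]],  // gold near full energy
  [1.0,  [255, 255, 255]], // white-hot at maximum
];

function thermalColor(energy) {
  const e = Math.min(1, Math.max(0, energy));
  for (let i = 1; i < STOPS.length; i++) {
    const [t0, c0] = STOPS[i - 1];
    const [t1, c1] = STOPS[i];
    if (e <= t1) {
      // Linear interpolation between the two surrounding stops.
      const f = (e - t0) / (t1 - t0);
      return c0.map((v, k) => Math.round(v + (c1[k] - v) * f));
    }
  }
  return STOPS[STOPS.length - 1][1];
}
```

Because the ramp is a single function of energy, every layer of the orb — haze, surface, core, sparks — can share it and stay in sync as the charge climbs.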
Full-screen camera, not a widget. A small mirror in the corner makes the effect feel like a feature. Full-screen puts you inside the experience, so you're not looking at a tool, you're in a room with an orb. That shift in scale is what makes the whole thing feel architectural instead of gimmicky.
Discovery over instruction. There's no tutorial anywhere on the page. You show your hands and something sparks. You bring them together and it charges. You figure out you can throw it. The experience tracks which gestures you've found in-session and only surfaces hints for the ones you haven't, so the interface feels like it's paying attention to you specifically. Finding each mechanic on your own is the experience.
Fiction as design language. The telemetry numbers (energy percentage, plasma temperature, containment status, throw velocity) are completely made up. There's no real frequency, no actual temperature threshold. But I made them up carefully, because the metrics were the language I used to think through how the orb should look and feel at each charge state. Energy maps to color, frequency maps to how the surface pulses, and temperature is why it shifts from purple to white-hot. The values are invented but they correlate exactly with what's on screen, which is what makes the readouts feel credible.
Japanese as authorship, not decoration. Widget headers run in Japanese with monospaced English subtitles beneath: 気力測定 / ORB TELEMETRY, 手の動き / HAND METRICS. The page title is a single kanji. A small watermark credits 気力研究所, a completely fictional Ki Energy Research Institute. These aren't translation choices, they set a register: terse, slightly ceremonial, like the object has its own institutional backstory. People who read Japanese catch a different layer of the joke. People who don't just think it's a design choice.
Monospaced throughout, no display type. The original build spec included Bangers, the chunky comic font, as a nod to the source material. I replaced it with Space Mono across everything because Bangers reads as costume; mono type reads as instrument. The gap between "I know what this is referencing" and "this looks like a fan site" is entirely typographic.
Orbs is the first in an ongoing series of hand-tracking experiences built on the same interaction engine. Each release swaps the orb material, widget aesthetic, and corner graphic treatment, while the gestures and physics stay consistent. The first skin is this one: comic book source material, Japanese branding, thermal color. Future skins will run on the same engine with completely different visual registers.
The format is a numbered drop series, and this is Issue 01.
Orbs isn't solving a problem. It was a question I had: could I make someone feel something through a screen they'd never felt before? Answering it meant learning a graphics pipeline, a machine learning library, a noise algorithm, and a Web Audio API I'd never touched, mostly to protect an illusion most people will experience for about ninety seconds and immediately want to show someone else.
That loop (the thing you stumble into, want to understand, and immediately hand to another person) is what I was designing toward, and getting there meant learning how to build it.
Allow camera access. Best in Chrome or Safari on a laptop.
Something to say about this? A gesture that didn't work, a skin you want to see, a feeling it gave you. I'm genuinely curious.
An operating system for my freelance brand work. Four tools that share state and feed each other (intake, mixer, proposal, asset hub), built so I can deliver an engagement end-to-end without stitching together a stack of third-party apps.
I built Groundwork because the standard brand engagement process is broken. You spend weeks on a system, then put it in a PDF and ask a client to imagine how it all fits together. It never works.
Groundwork is four tools that share state and feed each other — and the AI intelligence running underneath is what makes it work.
After a client call, I dump my meeting notes in. The intake generates custom questions on top of my default set — adding things specific to the project I might not have thought to ask, flagging compliance considerations, surfacing contradictions in what the client said they wanted. When they submit, I get an email synthesizing their answers into prioritized next steps and callouts.
Those same answers drive everything downstream. The proposal auto-generates from intake responses — no rewriting scope language for every client. The asset checklist builds itself from their specific needs, dimensions, and delivery contexts. And the brand mixer gives clients a live environment to explore directions in real time, within combinations I've already vetted as design-coherent. They feel like they're driving. The rails are mine.
First engagement I used it on: sign-off that usually takes a week happened in one afternoon.
The intake generates itself from my notes. After the first call or two with a client, I dump my meeting notes in, and it builds a custom questionnaire on top of my default question set, adding things specific to the project that I might not have thought to ask. Compliance constraints I should be flagging, execution styles that fit their timeline, platform-specific things. I can edit anything before it goes out.
What makes it work for the client is that it's not a generic form. The questions help them figure out what they actually want and need to prioritize, not just collect their answers. When they submit, I get an email with a synthesis and a prioritized list of next steps I can take or leave.
That same submission also drives the proposal and the asset checklist downstream, which is why it sits at the front of the workflow.
The mixer is the visualization layer, and I use it as a follow-up to a normal presentation. I'll walk a client through each brand element on its own first (typography, color palette, logo marks) and then send them the mixer link so they can live with it and play with combinations themselves.
The thing that makes it useful and not chaotic is that I curate the combinations they can actually try. I've constrained the pairings to ones I've already vetted as design-coherent, so they're not freely combining a serif with a clashing palette. They feel like they're driving, but the rails are mine.
On a recent engagement this saved a few days of back-and-forth. The client landed on a combination they loved, and I could tell from their reactions that things finally clicked once they saw the brand in context instead of in the abstract.
What the mixer is and isn't. The mixer is a sign-off tool for direction, not a UI/UX generator. By the time a client sees it, I've already done the thinking — narrowing the options and locking the variables down to a final set of decisions on each element. From there, the mockups themselves are kept purposefully generic. They're meant to show how brand elements interact with each other and come to life across a few real-world settings, not to stand in for actual product or marketing design. Detailed UX work is a distinct process and a separate phase.
That separation is intentional. I want the mixer to hold me accountable for thinking through how typography, color, and logo move together as a system — without letting AI shortcut the slower thinking that real marketing assets, templates, and product screens require. This is AI in its lane: helping me process, brainstorm, and visualize. Not generating finished work. The thinking about each element happens behind the scenes; the mixer just brings it to life.
The proposal is an e-signable scope agreement that auto-generates from the intake answers and any notes I've added since. It's a template that gets customized to the engagement automatically, then locked once both parties sign. The same answers that drive my internal priority list also drive the proposal, so I'm not rewriting scope language for every client.
The asset hub is the tool that surprised me by being more useful internally than externally.
The checklist is AI-generated from the intake answers, my meeting notes, and any exceptions I've added. It's pre-loaded with standard brand asset package requirements, so it knows what a typical handoff needs (logo variations, color tokens, type license, favicon kit) and tailors the list to whatever the engagement specifically calls for.
What's actually been most useful is that I use it before I hand anything off. As I build assets, I check them in against the list, and it surfaces what's still missing, flags inconsistent file naming, and keeps the folder structure clean. By the time I'm ready to share with the client, the work has already been quality-controlled against the original requirements. When I do share, the client can preview every asset, download the full package as a zip or grab individual files, and forward the link to anyone who needs it.
Groundwork started as a way to deliver one engagement end-to-end and turned into how I deliver every brand engagement. The unexpected piece was how much of its value flows toward me, not the client. The client gets a faster, clearer workflow, and I get a memory system that compounds across projects.
The shape of the next iteration is already visible from where this one leaves off: a presentation layer that replaces the static slides I still walk clients through live, and a revision portal for the back-and-forth that happens after the mixer phase. Both are versions of the same idea, which is moving more of the workflow inside the tool, where it can compound.
Using something like this, or wish you were? I'm building in the open and would love to hear what resonates or what's missing.












I'm Talia, a brand and visual designer based in Brooklyn. I build design systems, campaign frameworks, and visual identities that hold up at scale — shaped by six years in-house at Uncommon Goods and Jonathan Adler, plus 25+ freelance brand builds for startups and small businesses since my time at USC.
My work sits at the intersection of brand craft and operational thinking — translating strategy into production-ready systems across digital, print, and retail, and collaborating cross-functionally with engineering, marketing, and product teams. I'm increasingly drawn to how AI tools reshape creative workflows, and I use them daily in my own practice.
View resume