Hello, checking out this answer is helpful, except if the value of the column is multivalue. How do you remove the blue hyperlink color and background from an image when you click on it?

<html>
<style>
#tableWithDrilldown2 table tbody tr td, #tableWithDrilldown2 table thead th a { color: white !important; }
</style>
</html>
<table id="tableWithDrilldown2">

This is the value of one of the columns in our dashboard table:

| eval events=case(
    pt="1-acc", TIME." ".dur." ".pt." Code=".cde." ".certStatus." for ".user,
    pt="2-pay", TIME." ".dur." ".pt." ".user." Code=".cde." ".week." $".gross." Bal: $".bal,
    pt="3-new", TIME." ".dur." ".pt." ".user." Code=".cde)

This is an example of the output for just two users. The combination of events can come in different orders and include 1, 2, or all 3 of the event types (acc, pay, new). NOTE: the timing overlap between the two users' event sets is a coincidence, not the norm.

19:35:09.3 10.7 1-acc Code=0000 Success for Jessie
19:35:19.2 09.8 3-new Jessie Code=2801
19:36:56.3 01:37.1 2-pay Jessie Code=0000 03-27-2021 $0 Bal: $17250
19:45:09.3 10.7 1-acc Code=0000 Success for Billie
19:45:19.2 09.8 3-new Billie Code=2801
19:46:56.3 01:37.1 2-pay Billie Code=0000 03-27-2021 $0 Bal: $17250

Thanks and God bless, Genesius
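A hedged sketch of one way to also neutralize the post-click link styling: extend the selector to cover the a:visited, a:active, and a:focus states and clear the background. The selectors assume the same table id as the snippet above and may need adjusting to your dashboard's generated markup.

```html
<html>
  <style>
    /* Cover drilldown links in every state, including after a click */
    #tableWithDrilldown2 table tbody tr td a,
    #tableWithDrilldown2 table tbody tr td a:visited,
    #tableWithDrilldown2 table tbody tr td a:active,
    #tableWithDrilldown2 table tbody tr td a:focus {
      color: white !important;
      background-color: transparent !important;
    }
  </style>
</html>
```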
Hi, I need to calculate the duration between each Out/In pair where A=A+100, B=B, and IDS=IDS:

00:03:02.067 app catZZ_DDP_AP: O[host]A[1000]B[123456]IDS[123456789987]
00:03:02.110 app catZZ_DDP_AP: I[host]A[1100]B[123456]IDS[123456789987]

Expected output:

duration          B          IDS
00:00:00.043      123456     123456789987

Any idea? Thanks
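A hedged sketch of one possible approach: extract the direction plus the keys with rex, normalize the Out-side A value by adding 100 so both events share a key, then take the time range per key. The index name and extraction regex are illustrative, and range(_time) assumes exactly one O and one matching I event per key.

```
index=your_index "catZZ_DDP_AP"
| rex "(?<dir>[OI])\[[^\]]+\]A\[(?<A>\d+)\]B\[(?<B>\d+)\]IDS\[(?<IDS>\d+)\]"
| eval A_key=if(dir="O", tonumber(A)+100, tonumber(A))
| stats range(_time) as duration by A_key B IDS
| eval duration=tostring(duration, "duration")
| table duration B IDS
```

tostring(X, "duration") renders the numeric difference in HH:MM:SS form; if you need millisecond precision in the output, the formatting step would need adjusting.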
Hi, I want to create a correlation alert that will trigger and collect all the events from the same IP within a certain time window. I tried "group by", but it did not work. Thanks!
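One hedged starting point: bucket events into fixed time windows with bin, then aggregate per window and IP, and alert when the count crosses a threshold. The index, the src_ip and signature field names, the 5-minute span, and the threshold of 10 are all illustrative assumptions.

```
index=your_index
| bin _time span=5m
| stats count values(signature) as signatures by _time, src_ip
| where count > 10
```

Scheduled as an alert over a matching time range, each result row then represents one IP that exceeded the threshold within one window.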
Hi, is it possible to show decimal numbers on a Sankey diagram? E.g. my SPL command produces the number 0.13, but the Sankey diagram just shows 0. Any idea? Thanks
I am trying to filter out null values from the result of stats. The query looks like below.

index=someindex* some ((somefield1=value1 AND somefield2="value2") AND (somefield1=value3 OR (somefield2=value4 AND somefield1=value5))) OR (somefield1=value6)
| eval someeval=...
| replace "some*" with "SOME" in somefield1
| bucket _time span=1d as daytime
| stats max(eval(if(somefield1=value1,_time,null()))) as val1_time
        min(eval(if(somefield1=value2,_time,null()))) as val2_time
        min(eval(if(somefield1=value3,_time,null()))) as val3_time
        by somefield3 somefield4
| eval recovered_time=if(isNotNull(val2_time),val2_time,val3_time)
| where isNotNull(val1_time)

But this query also returns results with a null or empty val1_time. What could be the issue? I further pass the result of this query to another stats query, but I am stuck here.
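A hedged rewrite of the tail of the query worth trying: the eval function is documented as isnotnull (all lowercase), and after stats a "missing" aggregate can surface as an empty string rather than a true null, so filtering on both conditions covers both cases. Field names are taken from the question; this is a sketch, not a verified fix.

```
| stats max(eval(if(somefield1=value1,_time,null()))) as val1_time
        min(eval(if(somefield1=value2,_time,null()))) as val2_time
        min(eval(if(somefield1=value3,_time,null()))) as val3_time
        by somefield3 somefield4
| eval recovered_time=coalesce(val2_time, val3_time)
| where isnotnull(val1_time) AND val1_time!=""
```

coalesce() returns its first non-null argument, which is equivalent to the if(isnotnull(...)) fallback in the original.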
Hi, I am trying to do index-time masking where my data is not in _raw but in a separate field A. For example, field A has the following data:

"Path=/LoginUser Query=CrmId=ClientABC&ContentItemId=TotalAccess&SessionId=3A1785URH117BEA&Ticket=646A1DA4STF896EE&SessionTime=25368&ReturnUrl=http://www.clientabc.com, Method=GET, IP=209.51.249.195, Content=", ""

I have applied transforms rules as below:

[session-anonymizer]
SOURCE_KEY = field:A
REGEX = (?m)^(.*)SessionId=\w+(\w{4}[&"].*)$
FORMAT = $1SessionId=########$2
DEST_KEY = field:A

The problem is that when we set DEST_KEY to _raw, the data is masked properly, but I need the masked data written back to field A. How do we get this masked into field:A? I have also tried adding:

[accepted_keys]
is_valid = field:A
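One hedged alternative to the REGEX/FORMAT/DEST_KEY approach: INGEST_EVAL (available in transforms.conf on Splunk 7.2+) can reassign an indexed field in place, and eval's replace() supports regex backreferences. This is a sketch under the assumption that A already exists as an indexed field at this point in the ingest pipeline; verify against your pipeline before relying on it.

```
# transforms.conf -- sketch, not a verified drop-in
[session-anonymizer]
INGEST_EVAL = A=replace(A, "SessionId=\w+(\w{4}[&\"])", "SessionId=########\1")
```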
Hi, how can I hide the "code" column from the output of the lookup command?

.... | lookup myfile.csv code OUTPUT description

FYI: I have some stats before the lookup, so I don't want to use the "table" command.

Any idea? Thanks,
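The fields command with a minus sign removes columns without the column-reordering behavior of table, so one sketch based on the search in the question (keeping the elided pipeline as-is):

```
.... | lookup myfile.csv code OUTPUT description
| fields - code
```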
Is there a way to set permissions for MLTK model files in the local.meta file?
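There is no MLTK-specific object type in .meta files, but MLTK saves trained models as lookup artifacts named __mlspl_<model_name>.mlmodel, so a hedged sketch of a local.meta stanza follows. The model name mymodel and the roles are illustrative; verify the actual artifact name (e.g. via | rest /services/data/lookup-table-files) before applying.

```
# metadata/local.meta in the app that owns the model -- sketch
[lookups/__mlspl_mymodel.mlmodel]
access = read : [ * ], write : [ admin, power ]
export = system
```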
Hi! Thanks for your help. I have a question. All this in Dashboard Studio.   I need to add a digital clock (hh:mm:ss) to the dashboard, that looks nice and shows me the time in real-time. Also, the dashboard is updated every minute, and we need to show the time (hh:mm:ss) it was updated in another panel (We don't want to use ShowLastUpdated code) Regards!
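Dashboard Studio has no built-in clock visualization, but one lightweight sketch, assuming a 1-second data-source refresh interval is acceptable in your environment, is a Single Value panel driven by a trivial search with the refresh interval set on its data source:

```
| makeresults
| eval clock=strftime(now(), "%H:%M:%S")
| table clock
```

The same pattern with the panel's refresh aligned to the dashboard's one-minute update can stamp the last-updated time into the second panel without ShowLastUpdated.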
Hi, I have a radio button with 3 choice values. When any of the radio buttons is clicked or hovered, it should show me a message. Can you please help me with the code? Example: when "TR Details" is hovered/clicked it should show the message 'TR', and similarly when "TR DUE" is hovered/clicked it should show the message 'DUE'. Below is my radio button code:

<input type="radio" id="landscape" token="TR">
  <label>Landscape</label>
  <choice value="TR Details">TR Details</choice>
  <choice value="TR DUE">TR DUE</choice>
  <change>
    <condition label="TR Details">
      <set token="TR view">TR view</set>
      <unset token="TR DUE">TR DUE</unset>
    </condition>
    <condition label="TR DUE">
      <set token="TR DUE">TR DUE</set>
      <unset token="TR view">TR view</unset>
    </condition>
  </change>
</input>
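Hover events are not supported by Simple XML inputs without custom JavaScript, but the clicked case can be handled with tokens alone. A sketch follows; note that token names may not contain spaces, and the tr_mode/tr_message token names and the html panel placement are illustrative assumptions.

```xml
<input type="radio" id="landscape" token="tr_mode">
  <label>Landscape</label>
  <choice value="TR Details">TR Details</choice>
  <choice value="TR DUE">TR DUE</choice>
  <change>
    <condition label="TR Details">
      <set token="tr_message">TR</set>
    </condition>
    <condition label="TR DUE">
      <set token="tr_message">DUE</set>
    </condition>
  </change>
</input>
<!-- placed inside a <row>/<panel> elsewhere in the dashboard -->
<html depends="$tr_message$">
  <p>$tr_message$</p>
</html>
```

The depends attribute hides the message panel until a choice has been clicked at least once.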
I have about 10 indexers in a cluster. For some reason my master node went down, and when it came back up my data had disappeared: there were 18 million events, and now there are 9 million. What could cause this? I can't find anything in the logs. HELP PLS
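A hedged first step, assuming CLI access on the cluster manager: check whether the replication and search factors are currently met and whether buckets are still fixing up after the restart, since searchable counts can drop temporarily while the cluster recovers.

```
# Run on the cluster manager -- shows RF/SF status and per-peer bucket state
splunk show cluster-status --verbose
```

If the factors are met and the count is still halved, comparing bucket inventories per peer (e.g. with dbinspect) would be the next place to look.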
Dunt da dah dah! I present your .conf21 Platform highlights! The Splunk Platform is your partner in cloud transformation, helping you…

1. Gain end-to-end visibility to thrive in this hybrid age

You love Splunk for the ease of quick access and the ability to get insights out of machine data, and now we are adding the ability to filter and route your data to other endpoints, making it even easier to ingest and analyze your data in Splunk.

- Simply redact, filter, and route "noisy" data from the indexer layer with Ingest Actions (Preview)
- Enjoy a modernized, simple data onboarding experience with our new Data Manager (Preview) in Splunk Cloud Platform
- Leverage these tools and additional resources to bring in even more data sources, including telemetry with the Splunk OpenTelemetry (OTel) Connector for Kubernetes (Preview)

Whether you store your data in our cloud offering or yours, you can reduce costs with a variety of flexible storage offerings:

- Explore the new Flex Index (Preview) in Splunk Cloud Platform for affordable search and storage of your less-time-sensitive data
- Decrease infrastructure costs with SmartStore, now available on Microsoft Azure (Preview)
- Align your spend with value with Workload Pricing, including enhanced management tools such as the Workload Pricing Dashboards in the Cloud Management Console

2. Extend investigation and drive faster action for all data, wherever it lives

No matter where data lives, investigate it with a more robust, faster, and reimagined search experience.

- Enjoy a seamless, unified search experience across Splunk deployment types with Federated Search
- Search faster and with greater control with increased scale, processing, and self-service app capabilities with the Victoria Experience in Splunk Cloud Platform
- Look into the future and explore our reimagined Search & Reporting (Preview) with our next-generation, cloud-native search experience

Efficiently visualize your search results and drive faster action.
- Try out Dashboard Studio, our new and intuitive dashboard-building option for communicating even your most complex data insights
- Take action from anywhere with access to all your dashboards on the go with Splunk Mobile and TV
- Quickly tackle new use cases with Splunk Product Guidance, offering in-product walkthroughs, helpful guides, and access to relevant how-to articles

3. Customize Splunk YOUR way

Deploy and manage Splunk with a redefined admin experience, whether you are in the cloud, on-prem, or both.

- Effortlessly install, scale, and manage on your choice of cloud environment with the Splunk Operator for Kubernetes
- Streamline your self-management with admin improvements (Preview), including configuration change tracking, health report enhancements, and much more
- Plus, we have your back with upgrade readiness tools to easily prepare for Python 3 and jQuery 3.5 migrations and ensure your environment is ready for updates

With Splunk, if you can imagine it, you can build it!

- Simply find or develop any app for your unique business needs with our new Splunkbase (Preview), which offers a modern user experience and curated collections and categories

Eager for more details on these awesome updates? Lucky for you, you can find them in our .conf21 Platform blog, check out what's going on right now, or watch everything on demand afterward on .conf online. Excited?! Let's see it in the comments!

Judith - Platform Product Marketing (also author of your monthly SaaSy (Splunk-as-a-Servicey) Updates for Splunk Cloud Platform)
Hello, Community! We made several announcements at .conf21 that we are excited to share with you, in case you missed them.

Enterprise Security

Coming soon: Enterprise Security Cloud is packed with new capabilities to give security teams insights to drive faster detection and response, and continues to build on the capabilities previously announced. Here are the highlights:

- Executive Summary Dashboard: The new Executive Summary dashboard surfaces key performance indicators that provide insight into the overall health of the SOC and facilitates reporting to CISOs and other senior leaders. It allows you to quickly assess Mean Time to Triage, Mean Time to Resolution, Investigations Created, Risk-Based Alerting Trends, and more.
- Security Operations Dashboard: The Security Operations Dashboard shares key insights but provides deeper analysis for SOC managers and team leads. Previously, Enterprise Security introduced a dispositions feature in incident review that let you record whether an event was a true positive, false positive, or benign positive. Coming soon, you will be able to see and report on this data over time, and get a deep dive into exactly which correlation sources contribute to each of the four default disposition types.
- Cloud Security Monitoring Dashboard: We are also enhancing the Cloud Security Monitoring Dashboard to give you new dashboards such as AWS Security Groups and AWS IAM Activity, as well as new dashboarding to capture Microsoft 365 data.
- Automated Real-Time Content Updates: We are also adding in-product, automated real-time content updates so that you can get the latest security content from the Splunk Threat Research Team, as soon as it is available, with one click!
Behavioral Analytics for Security Cloud (Preview)

Splunk Behavioral Analytics for Splunk Security Cloud, now in Preview, provides threat detection using streaming security analytics to uncover unknown threats and anomalous user and entity behavior. Augment your SIEM in the cloud with real-time search and analytics, in addition to traditional search-based correlations and batch analytics, to accelerate your mean time to detect and spend more time hunting with higher-fidelity, risk-based behavioral alerts.

SOAR

- Splunk SOAR's new and improved Visual Playbook Editor makes it easier than ever to create, edit, implement, and scale automated playbooks to help your business eliminate security analyst grunt work and respond to security incidents at machine speed.
- Splunk SOAR apps are now available on Splunkbase, providing you with a one-stop shop to extend the power of SOAR.
- Splunk SOAR's new App Editor allows you to create, edit, and test apps all from one place, making the app development experience easier and faster than ever.

Splunk Intelligence Management (TruSTAR)

The Splunk Intelligence Management technology, formerly TruSTAR, breaks down data silos within and across enterprises to align security effectiveness with business objectives, improving cyber resilience and operational efficiency. The unified intelligence API delivers insights directly into your Splunk Security products, and joint customers benefit from the ability to:

- Reduce noise from intel sources to automatically improve alert prioritization
- Easily share threat intelligence data across teams, tools, and sharing partners
- Drive efficiencies in Splunk SOAR playbooks with enrichment based on normalized intelligence

SURGe

The complexity of security threats is increasing exponentially. Having access to expert knowledge, refined processes, and best-of-breed technologies can enable organizations to stay proactive in securing their business.
SURGe is a team of Splunk security experts, threat researchers, and advisors that supports security teams during high-profile, time-sensitive cyberattacks with timely contextual awareness and initial incident response techniques. Beyond being an advisor and trusted partner for customers during high-profile security incidents, SURGe will also provide security research on a variety of topics via blogs, long-form whitepapers, webinars, presentations, and many more types of content. By leveraging SURGe's technical guidance and security research, security teams can find clarity amidst chaos and reduce their mean time to detect and mean time to respond. You can learn more about SURGe, read about our latest research on detecting supply chain attacks, or sign up for alerts on high-profile security incidents on our website.

For more details, screenshots, and more about all the cool stuff we announced at .conf21, check out our full announcement blog. For the full scoop on what's coming to Enterprise Security, check out the "What's New in Enterprise Security" .conf21 session. Also, be sure to check out the Security Super Session for a full picture on Security, and all the awesome SOAR sessions!
Greetings Admins, Ops leaders, SREs, Developers, and then some! It's my first time posting for you all here in our Splunk Community, so I wanted to take a moment to introduce myself and share some of our exciting updates coming with our .conf21 event this week! I'm Nico, and I work as part of our Observability Product Marketing team here at Splunk! I have a passion for our users, and I love to learn about all of the inventive and amazing things they're doing with observability in order to ship code faster and provide amazing customer experiences!

Now… on to the exciting stuff! To help you address complexity and provide you with a comprehensive solution to monitor and troubleshoot any issue in your environment, we continue to invest in our enterprise-grade observability capabilities at Splunk. These include Splunk Observability Cloud, Splunk IT Service Intelligence (ITSI), and Splunk IT Essentials Work. We've got some hot new innovations we're adding to Splunk's Observability portfolio to help you solve your modern monitoring challenges.

Here is a summary of our Observability announcements:

- Splunk Observability (Log Observer) integration with Splunk Enterprise (Preview)*
- Splunk APM AlwaysOn Profiling (Preview)*
- Splunk APM Database Visibility (Preview)*
- Splunk RUM for Mobile Apps (GA)
- Splunk Infrastructure Monitoring AutoDetect (Preview)*
- Splunk Observability Cloud for Mobile (GA)
- OpenTelemetry eBPF Collector Donation
- IT Essentials Learn & Work (GA)
- Splunk App for Content Packs (GA)
- New Microsoft 365 Content Pack (GA)
- New 3rd-party APM Content Pack (GA)
- New Synthetic Monitoring Content Pack (GA)
- New Observability Cloud Content Pack (Preview)

And now… on to the details of our innovations!

Observe Any Environment with Deeper Integrations and Expanded Use Cases

First, we are previewing the Splunk Observability integration with Splunk Enterprise via Splunk Log Observer.
This integration will let you use the Log Observer interface directly within Observability Cloud and access data you're already sending to your existing Splunk instances. (Freebie note! If you happen to be an existing Splunk Enterprise customer who has Splunk Infrastructure Monitoring, Splunk APM, or Splunk Observability Cloud licenses, you can leverage Splunk's intuitive Log Observer interface at no extra cost, and usually without having to write any new SPL.)

And for you developers and service owners out there, we are also previewing AlwaysOn Profiling in Splunk APM, to provide visibility into code-level performance, linked to trace data, in order to troubleshoot production issues faster. To further assist in troubleshooting and optimization, Splunk APM's Database Query Performance, now in preview, might be worth checking out too, as it helps find issues faster in distributed systems by showing queries and latency specific to a service-database interaction.

With the general availability of Splunk RUM for Mobile Apps, we've added end-to-end visibility into native mobile apps to help monitor and deliver great customer experiences on iOS and Android. Splunk RUM now supports both web browsers and mobile apps, with end-to-end tracing to backend services, to give you the complete picture of the end-user experience. With significant momentum planned for Splunk Synthetic Monitoring, we continue to deepen Splunk's digital experience monitoring capabilities with extended full-fidelity visibility to help you deliver a great customer experience.

In addition to these new innovations, we're excited to announce that we are going mobile! Splunk Observability Mobile enables on-call SREs and developers to access all critical Observability Cloud dashboards and alerts on the go (freedom from your desktop!).
It provides intuitive visualizations that let you better understand alert details right from your Apple or Android phone for faster triage, or simply view your real-time dashboards to check on the health of your environments. Mobile access is included with any Splunk Observability Cloud license.

Free Out-of-the-Box Capabilities for Faster Time to Value

If that wasn't enough, we're also previewing a new feature in Splunk Infrastructure Monitoring called AutoDetect, which automatically discovers infrastructure anomalies, such as high container restarts or pods stuck in pending status, and intuitively incorporates alert status into dashboards. This simplifies the onboarding process and accelerates time to value via out-of-the-box problem detection for critical components.

Additionally, the new Splunk App for Content Packs acts as a one-stop shop for prepackaged content addressing common monitoring and troubleshooting use cases in our IT Service Intelligence (ITSI) and IT Essentials Work products, including new Content Packs for managing Microsoft 365, third-party APM tools, and Synthetic Monitoring. Lastly, we are previewing a new content pack for Observability Cloud, which integrates data from Splunk APM, Splunk Infrastructure Monitoring, and Splunk Synthetic Monitoring into a single, unified Service Analyzer view within Splunk ITSI for complete, full-stack service visibility and management.

Finally, as we recently announced at KubeCon, we will continue our leadership and contributions to OpenTelemetry with the donation of the eBPF Collector. Based on the technology acquired last year from Flowmill, the collector enables network observability for modern cloud applications. Specifically, the eBPF Collector allows accurate, complete models of cloud network dependencies and service health to be built without any changes to code or container images.

Want to Try Some of our New Goodness on for Size? Or Want to Learn More?
Most of this is available through our existing free trial experiences, and you can always learn more about the new generally available features in our Splunk Docs. If you haven't explored Splunk's Observability portfolio yet, you can dive right in here to see how you can expand your use cases and make your operations better - and life easier! Thanks for reading through these cool new observability updates! Make sure to connect with us on what you're most excited about - leave a comment below if you'd like to share your feedback with us. As Janet Jackson would say… it's O11y for you! — Nico
Need help with the below. The sourcetypes have different values in them, like below:

index=a sourcetype=b | eval details=1
| append [| search index=c sourcetype=d | eval details=2]
| append [| search index=e sourcetype=f | eval details=3]
| eventstats count by details
| Pass%=count(pass)/total*100,2 Fail%=count(fail)/total*100,2 Error%=count(Error)/total*100,2
| table pass fail error total

I have a bar chart with details on the x-axis and the percentages (pass%, fail%, error%) on the y-axis. When I click a details value (x-axis) in the bar chart, a single value should show the number of individual pass, fail, and error in a trellis. Please let me know how this can be achieved.
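A hedged sketch of the aggregation part, assuming each event carries a status field with values pass/fail/error (that field name is an assumption; the Pass%= line above is not valid SPL, so the percentages are computed with round() inside an eval instead):

```
index=a sourcetype=b | eval details=1
| append [ search index=c sourcetype=d | eval details=2 ]
| append [ search index=e sourcetype=f | eval details=3 ]
| stats count(eval(status="pass")) as pass
        count(eval(status="fail")) as fail
        count(eval(status="error")) as error
        count as total by details
| eval pass_pct=round(pass/total*100,2), fail_pct=round(fail/total*100,2), error_pct=round(error/total*100,2)
```

For the drilldown, the bar chart's click can set a token from $click.value$ (the clicked details value), and the trellis single-value panel can run the same base search filtered by details=$token$.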
Hello All, wondering if anyone can help? I am currently looking at RBA and adding a multiplier for any users that are leaving. At first glance I was considering looking at risk_object_endDate=*, but am now wondering how the identity lookup works and whether I can be clever and add a category "leaver" to the user (or a risk_object_identity_tag that index=risk will pick up). From some research I think the identity lookup is being run by many searches, but mainly from ldapsearch. Does this mean it is picking up categories from LDAP? I am not sure how to check what the lookup runs to fill its contents. Any help/guidance would be great! Thank you, J.
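One hedged way to inspect what the merged identity data actually contains: identity_lookup_expanded is the default name of the merged identity lookup in Enterprise Security, though it may differ in your environment, and the field names below are the standard identity-table columns.

```
| inputlookup identity_lookup_expanded
| search category="*leaver*"
| table identity category watchlist endDate
```

If the "leaver" category shows up here after you add it to an identity source, risk events for that user should carry it as risk_object metadata.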
I have read the explanation on the mrsparkle dir via Solved: So I get the obvious Simpsons reference but what a... - Splunk Community but I am seeing a lot of instances of failed connections from the indexer to the search head via port 8000 trying to initiate a connection with it.  Firstly - should this be trying to connect from the indexer to the search head, and if so, is it still valid that this is required? Then - why would that be failing if there are already connections from the indexer to the search head via 8000?    
Hello everyone, I have tons of DNS queries in my enterprise on legitimate commercial domains (e.g. partnerweb.vmware.com, login.live.com) which I don't want to log with Splunk Stream. My configuration is as follows, but apparently it doesn't work:

app: Splunk_TA_stream_wire_data

props.conf
[streamfwd://streamfwd]
TRANSFORMS-blacklist-vmwarecom = vmware.com

transforms.conf
[vmware.com]
REGEX=query\=partnerweb\.vmware\.com
DEST_KEY=queue
FORMAT=nullQueue

Any help would be appreciated. Kind regards, Chris
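A hedged observation: [streamfwd://streamfwd] is an inputs.conf stanza name, while props.conf stanzas are keyed by sourcetype, source::, or host::, so the transform may never be applied. A sketch keyed on the Stream DNS sourcetype follows; the stanza name and regex are assumptions, so confirm the actual sourcetype and the raw JSON layout of your Stream events first.

```
# props.conf -- sketch
[stream:dns]
TRANSFORMS-blacklist-vmwarecom = drop_vmware_dns

# transforms.conf -- sketch
[drop_vmware_dns]
REGEX = partnerweb\.vmware\.com
DEST_KEY = queue
FORMAT = nullQueue
```

Note that if streamfwd forwards over HTTP Event Collector, indexer-side nullQueue routing may not see these events at all; in that case, filtering within the Stream app's own configuration is the alternative.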
Hi, when I launch a dashboard, I randomly get the message below: "Waiting for the task to start in the queue." What does it mean, and how can I avoid it, please? Rgds
Hi All, I need your help creating a cron expression for an alert schedule. I need to schedule an alert from Monday 02:00 to Saturday 00:30. If any other information is required, please let me know. Any help will be highly appreciated. Thanks in advance.
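A single cron expression cannot express one window spanning Monday 02:00 through Saturday 00:30. One hedged workaround, assuming the alert runs on a 15-minute interval, is to clone the alert three times, each clone covering part of the window (day-of-week: 1 = Monday, 6 = Saturday):

```
*/15 2-23 * * 1    # Monday 02:00 - 23:45
*/15 * * * 2-5     # Tuesday 00:00 - Friday 23:45
0,15,30 0 * * 6    # Saturday 00:00 - 00:30
```

If a different run interval is needed, each expression's minute field would change accordingly.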