Hi, I have created a multiselect dropdown, and when I try to make it dependent by passing the other dropdown's token, it doesn't show any values.

<form>
  <label>TEST- Multi Select with distinct value</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="flow">
      <label>Select Flow</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>FLOW</fieldForLabel>
      <fieldForValue>FLOW</fieldForValue>
      <search>
        <query>| loadjob savedsearch="Test_Data" | search adt="$adt$"</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
      <prefix>IN(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
    <input type="multiselect" token="adt">
      <label>Select ADT</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>adt</fieldForLabel>
      <fieldForValue>adt</fieldForValue>
      <search>
        <query>| loadjob savedsearch="Test_Data" | search flow="$flow$"</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
      <prefix>IN(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| loadjob savedsearch="Test_Data" | where FLOW $flow$ and adt $adt$ | table adt, FLOW, Date, NbRecordsOKFCR, CMTotal, NbRecordsOKCM, NBIntFile, NB1, NB2, NB3, NbErrorsCM, Alert | fields Date, adt, FLOW, CMTotal</query>
          <earliest>0</earliest>
          <latest></latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Hi, I need to create 2 dropdowns for dates, so the user can manually select start_date and end_date. Based on those selections, the data should be filtered to show only records between the two dates. Please help.
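A minimal Simple XML sketch of one way to do this, assuming (hypothetically) an index named my_index and a Date field in YYYY-MM-DD format, so that string comparison orders dates correctly:

```xml
<form>
  <fieldset submitButton="false">
    <input type="dropdown" token="start_date">
      <label>Start date</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>index=my_index | stats count by Date | sort Date | fields Date</query>
      </search>
    </input>
    <input type="dropdown" token="end_date">
      <label>End date</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>index=my_index | stats count by Date | sort Date | fields Date</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=my_index | where Date&gt;="$start_date$" AND Date&lt;="$end_date$" | table Date *</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

If the actual event timestamp is what matters, two input type="time" pickers (or earliest/latest tokens) would be the more idiomatic route.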
Hello, I am using 2 multiselect dropdowns. When they are on the default value 'All', the table doesn't show any values; after making a selection it works. After opening in search, it shows "*" as the value:

| loadjob savedsearch="TEST" | where FLOW IN("*") and adt IN("*") | table adt, FLOW, Date | fields Date, adt, FLOW, CMTotal

Original query:

<form>
  <label>AAA_Test</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="flow">
      <label>Select Flow</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>FLOW</fieldForLabel>
      <fieldForValue>FLOW</fieldForValue>
      <search>
        <query>| loadjob savedsearch="TEST" | dedup FLOW</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
      <prefix>IN(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <initialValue>*</initialValue>
    </input>
    <input type="multiselect" token="adt">
      <label>Select ADT</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>adt</fieldForLabel>
      <fieldForValue>adt</fieldForValue>
      <search>
        <query>| loadjob savedsearch="TEST" | dedup adt</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
      <prefix>IN(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| loadjob savedsearch="TEST"
          | where FLOW $flow$ and adt $adt$
          | table adt, FLOW, Date, NbRecordsOKFCR, CMTotal, NbRecordsOKCM, NBIntFile, NB1, NB2, NB3, NbErrorsCM, Alert
          | fields Date, adt, FLOW, CMTotal</query>
          <earliest>0</earliest>
          <latest></latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Please suggest.
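One common explanation (a sketch, not necessarily the only fix): `where FLOW IN("*")` compares FLOW against the literal string "*", which matches nothing, whereas the `search` command treats * as a wildcard. Rewriting the panel query along these lines makes the All default match every row:

```
| loadjob savedsearch="TEST"
| search FLOW $flow$ adt $adt$
| table adt, FLOW, Date, NbRecordsOKFCR, CMTotal, NbRecordsOKCM, NBIntFile, NB1, NB2, NB3, NbErrorsCM, Alert
| fields Date, adt, FLOW, CMTotal
```

The existing prefix/suffix settings on the inputs still expand the tokens to `IN("a","b")` style clauses, which the search command accepts.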
I'm using Ansible to try to configure Splunk Enterprise, more specifically to create a user for the Splunk add-on TA-jira-service-desk-simple-addon. However, I'm getting this error when trying to run my Ansible playbook:

"Status code was -1 and not [200]: Connection failure: [Errno 104] Connection reset by peer"

Is this because I'm using a free trial for Splunk? I deployed Splunk using the Splunk Enterprise API on AWS, and I can connect to Splunk Web no problem. Here's my Ansible playbook:

---
- name: Create Jira Service Desk User in Splunk
  hosts: splunk_sh
  gather_facts: false
  tasks:
    - name: Create user
      uri:
        url: "http://ec2-44-212-47-250.compute-1.amazonaws.com:8089/servicesNS/nobody/TA-jira-service-desk-simple-addon/ta_service_desk_simple_addon_account"
        method: POST
        user: "admin username"
        password: "admin password"
        body: "name=svc_jira&jira_url=test.url.com&username=test_username"
        status_code: 200

I redacted my admin username and password, but I tried using the URL above (which is the DNS name) and I also tried using just the IPv4 address with port 8089 and the endpoint, and it gave me the same error. I made sure that port 8089 is also open in my AWS Security Group. What could be causing the issue?
I have a search along these lines:

"duration: "
| rex field=host "(?P<host_type>[my_magic_regex])"
| rex "duration: (?P<duration_seconds>[0-9]+)"
| chart count by duration_seconds host_type limit=0 span=1.0

This is working exactly as expected. However, since I am doing count by ... for each host type, the histograms constructed for each host_type vary wildly. The lines have such different scales that overlaying them on the same axis is worthless. I need to either:
1. create a different chart for each host_type (and not worry about the actual value of count), or
2. normalize the y axis so that instead of the literal count, the max peak for all histograms is 1 (or 100, or whatever).
I think I'll need a foreach command somewhere, but I'm not sure of the best route forward. Maybe there's a command similar to count that I should be using instead.
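For option 2, one possible sketch: flip the chart output back into rows with untable, compute each host_type's peak with eventstats, divide, and pivot back with xyseries (everything before chart is the original search):

```
"duration: "
| rex field=host "(?P<host_type>[my_magic_regex])"
| rex "duration: (?P<duration_seconds>[0-9]+)"
| chart count by duration_seconds host_type limit=0 span=1.0
| untable duration_seconds host_type count
| eventstats max(count) as max_count by host_type
| eval count=count/max_count
| xyseries duration_seconds host_type count
```

For option 1, the trellis layout option on a chart panel splits the visualization by series without any SPL changes, though each small chart then keeps its literal counts.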
Splunk Lantern is a customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk. This month we’re excited to announce the relaunch of the Splunk Success Framework, a comprehensive resource for Splunk program managers to create best-practice processes for Splunk implementation. While we’ve been heavily focused on updating this tool, we’ve also got some new articles to share with you. Read on to find out more. The Splunk Success Framework Being a Splunk program manager is an exciting role with a lot of responsibility. Helping your organization implement Splunk for the first time, or expand its investment in Splunk, means you play a big part in helping your organization realize maximum value. While your organization’s experts in using Splunk are busy with the technicalities of configuring the software, you need to form a plan for implementation. Your plan should make it easy for you to manage Splunk on a day-to-day basis, while ensuring that value is delivered from now to the future. 
Some of the things you’ll need to do include:
- Learn how to manage stakeholders and conduct effective QBRs that demonstrate the value of the purchase
- Make sure your deployment is appropriately staffed and that the staff have access to training and understand their roles
- Understand the capacity of your deployment and have a backup and restoration plan prepared in case of failure
- Create processes for logging and data onboarding so everyone in the organization can get the value they need out of the Splunk platform

With the Splunk Success Framework, you'll have access to a system of best practices that will help you meet these needs, helping you unleash the full potential of your data with Splunk. This comprehensive framework has been updated to include a brand-new Fundamentals section, improved navigation, and fresh tips from Splunk experts. The four functional areas covered in the framework are program management, people management, platform management, and data lifecycle management. The best practices in the framework are flexible and modular, allowing you to tailor them to your organization's unique requirements.

Organizations implementing Splunk from scratch can have different needs than those who have been working with Splunk for some time. Because of this, all of the best practices within the framework are aligned with three adoption levels appropriate to your organization’s stage of its Splunk journey - standard, intermediate, and advanced - so you can choose the one that best fits your priorities, needs, and goals.

The Splunk Success Framework has been designed by experts at Splunk who have overseen scores of customer implementations and seen first-hand what works best. All of these learnings are captured within the framework to help you implement Splunk successfully, get value more quickly, and enable your organization to think differently about data and its potential.
Check out the Splunk Success Framework today, and please let us know what you think!

What Else?

We published several new articles and made updates to existing content throughout January. We’re also on the cusp of announcing some exciting new updates to our Use Case Explorers, which we’ll be blogging about next month. New articles you might be interested in include:
- Sampling data with Ingest Actions for data reduction
- Knowing your financial services customer
- Troubleshooting database performance
- Troubleshooting a service latency issue related to a database query

We hope you’ve found this update helpful. Thanks for reading!

Kaye Chapman, Customer Journey Content Curator for Splunk Lantern
Is there a way to find out what the oldest events are, by index, in the local cache when running SmartStore? I am able to ssh in and look at the local buckets, but is there a way to see it in the monitoring console or via a query?

Thanks!

Joe
Hi Experts, is there any way to have different colors on a bar chart based on X-axis values? Below is the code for my bar chart; I want different color bars for different countries (the X-axis values).

<chart>
  <search base="base_search">
    <query>|search Country=$Country$|stats dc(conversation-id) by Country</query>
  </search>
  <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
  <option name="charting.axisTitleX.visibility">visible</option>
  <option name="charting.axisTitleY.visibility">visible</option>
  <option name="charting.axisTitleY2.visibility">visible</option>
  <option name="charting.axisY.abbreviation">auto</option>
  <option name="charting.axisY.minimumNumber">1</option>
  <option name="charting.axisY.scale">log</option>
  <option name="charting.axisY2.abbreviation">auto</option>
  <option name="charting.axisY2.enabled">1</option>
  <option name="charting.axisY2.scale">log</option>
  <option name="charting.chart">column</option>
  <option name="charting.chart.overlayFields">count</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.fieldColors">{"CD": 0xFF0000, "IND": 0xFF9900, "ZA": 0x008000}</option>
  <option name="charting.legend.placement">right</option>
  <option name="refresh.display">progressbar</option>
  <option name="trellis.enabled">0</option>
  <option name="trellis.size">large</option>
  <option name="trellis.splitBy">Country</option>
</chart>
Is there a way to one-click reset all inputs back to their default values in dashboard studio? I have 7 different inputs (dropdowns and text) that are being used as filter criteria for a table. I would like a way to click "something" and have them all set back to their respective default values. I have done something similar for another dashboard that resets tokens that have been set based on clicked rows in charts/tables (just using a single value panel and setting all of the tokens), but I don't see a way to do this for inputs. Reloading the dashboard doesn't set them back to default either. It requires exiting the dashboard and relaunching. Thanks Craig
Hello from the Splunk Data Manager Team,

We are excited to announce the preview of the new Kubernetes Navigator for Splunk Observability Cloud. Before you search through previous conversations looking for assistance, we want to provide you with some basic information and quick resources.
- Want to access product docs? The Infrastructure Monitoring User Manual offers detailed guidance on the interfaces provided by the new Kubernetes Navigator.
- Want to request more features? Add your ideas and vote on other ideas in the Infrastructure Monitoring category via the Splunk Ideas Portal.

Please reply to this thread for any questions or to get extra help!
Numeral system macros for Splunk

Examples of Single Value panel and Table.

Hello, just an announcement. I have created macros that convert a number into a string with a language-specific expression (long and short scales, or neither). It was released on Splunkbase: https://splunkbase.splunk.com/app/6595

Language-specific expressions may be useful when displaying huge numbers on a dashboard to make their size easier to understand. Or it may help us mutually understand how numbers are expressed in other languages.

Ref.: About long and short scales (numeration systems): https://en.wikipedia.org/wiki/Long_and_short_scales

Example of use (sample for English speakers):

| makeresults
| eval val=1234567890123, val=`numeral_en(val)`
| table val

1 trillion 234 billion 567 million 890 thousand 123

Provided macros:
- numeral_en(1) : Short scale for English speakers
- numeral_metric_prefix(1) : Metric prefix. kilo, mega, giga, tera, peta, exa, zetta, yotta
- numeral_metric_symbol(1) : Metric symbol. K, M, G, T, P, E, Z, Y
- numeral_jp(1) : 万進法 for Japanese speakers. 千, 万, 億, 兆
- numeral_kr(1) : For Korean speakers. 千, 萬, 億, 兆
- numeral_cn_t(1) : Chinese with Traditional Chinese characters. 千, 萬, 億, 兆
- numeral_cn(1) : Chinese with Simplified Chinese characters. 千, 万, 亿, 兆
- numeral_in_en(1) : For India / South Asia English. thousand, lakh, crore, lakh crore
- numeral_in_en2(1) : For India / South Asia English. thousand, lakh, crore, arab
- numeral_nl(1) : Long scale for Dutch. duizend, miljoen, miljard, biljoen
- numeral_fr(1) : Long scale for French. mille, million, milliard, billion
- numeral_es(1) : Long scale for Spanish speakers. mil, millón, millardo, billón
- numeral_pt(1) : Long scale for Portuguese speakers. mil, milhão, bilhão, trilhão

The following are also provided since v1.1.1:
- numeral_binary_symbol(1) : Binary symbol. KiB, MiB, GiB, TiB, PiB, EiB, ZiB, YiB, RiB, QiB
- numeral_binary_symbol(2) : Binary symbol with an argument for rounding digits.
See the next article, "How to convert large bytes to human readable units (e.g. KiB, MiB, GiB)".

More details: see the Details tab on https://splunkbase.splunk.com/app/6595

Install this add-on on your search heads.

Advanced examples: sample usage of all provided macros, rounding the lowest 3 digits if over 6 digits.

| makeresults count=35
| streamstats count as digit
| eval val=pow(10,digit-1), val=val+random()%val.".".printf("%02d",random()%100)
| foreach metric_prefix metric_symbol binary_symbol en es pt in_en in_en2 jp kr cn_t cn nl fr [eval <<FIELD>>=val]
| table digit val metric_prefix metric_symbol binary_symbol en es pt in_en in_en2 jp kr cn_t cn nl fr
| fieldformat val=tostring(val,"commas")
| fieldformat metric_prefix=`numeral_metric_prefix(if(log(metric_prefix,10)>6,round(metric_prefix,-3),metric_prefix))`
| fieldformat metric_symbol=`numeral_metric_symbol(if(log(metric_symbol,10)>6,round(metric_symbol,-3),metric_symbol))`
| fieldformat binary_symbol=printf("% 10s",`numeral_binary_symbol(binary_symbol,2)`)
| fieldformat en=`numeral_en(if(log(en,10)>6,round(en,-3),en))`
| fieldformat es=`numeral_es(if(log(es,10)>6,round(es,-3),es))`
| fieldformat pt=`numeral_pt(if(log(pt,10)>6,round(pt,-3),pt))`
| fieldformat in_en=`numeral_in_en(if(log(in_en,10)>6,round(in_en,-3),in_en))`
| fieldformat in_en2=`numeral_in_en2(if(log(in_en2,10)>6,round(in_en2,-3),in_en2))`
| fieldformat jp=`numeral_jp(if(log(jp,10)>6,round(jp,-3),jp))`
| fieldformat kr=`numeral_kr(if(log(kr,10)>6,round(kr,-3),kr))`
| fieldformat cn_t=`numeral_cn_t(if(log(cn_t,10)>6,round(cn_t,-3),cn_t))`
| fieldformat cn=`numeral_cn(if(log(cn,10)>6,round(cn,-3),cn))`
| fieldformat nl=`numeral_nl(if(log(nl,10)>6,round(nl,-3),nl))`
| fieldformat fr=`numeral_fr(if(log(fr,10)>6,round(fr,-3),fr))`

The results of this search will look like the table in the top image of this article.
Hello,

I have installed the SCOM app (version 430) on my Splunk heavy forwarder (903). The Windows SCOM infrastructure consists of one management group with 2 management servers:

management group = SC-PROD, consisting of 2 management servers, SC-PRD1 and SC-PRD2

The reason for this is that when server 1 is down, data is still collected through the second node. I have configured the management servers in the Splunk SCOM app. I only have the option to connect to the management servers, not to a group. I connect with a URL, with a service account that exists on the server.

My problem is that I get duplicate data from both management servers. I am looking for a smart way to connect, like a cluster. Does anybody have experience and advice for this scenario? Any advice is appreciated.

Regards,
Harry
Hello,

index=fruits
| transaction fruit_id
| rex max_match=0 "using rex to get the Type"
| eval TypeList=mvdedup(Type)
| eval Typecount=mvcount(TypeList)
| table fruit_id TypeList Typecount Type

Current output:

Fruit_id | TypeList | Typecount | Type
1 | Apple Banana Orange | 3 | Apple Banana Orange Banana Orange Apple Orange Apple

Expected output:

Fruit_id | TypeList | Typecount | Type
1 | Apple Banana Orange | 3 | Apple - 3, Banana - 2, Orange - 3

I couldn't find the count of individual values in a multi-value field. Can someone help me? Thanks in advance.
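One way to get per-value counts (a sketch built on the search above; note that mvexpand after transaction can be memory-hungry on large groups):

```
index=fruits
| transaction fruit_id
| rex max_match=0 "using rex to get the Type"
| mvexpand Type
| stats count by fruit_id Type
| eval Type=Type." - ".count
| stats values(Type) as Type dc(Type) as Typecount by fruit_id
```

mvexpand turns each multivalue Type into one row per value, stats counts them per fruit_id, and the final stats folds the labeled counts back into one multivalue field. TypeList can be re-derived from Type afterwards if still needed.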
Installing the forwarder manually works fine, installing it automatically with the same user account fails with a 1603 error. Installer logs snippet:   MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2203 2: C:\Windows\Installer\inprogressinstallinfo.ipi 3: -2147287038 MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2205 2: 3: LaunchCondition MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2228 2: 3: LaunchCondition 4: SELECT `Condition` FROM `LaunchCondition` MSI (s) (B8:FC) [09:22:23:304]: APPCOMPAT: [DetectVersionLaunchCondition] Failed to initialize pRecErr. MSI (s) (B8:FC) [09:22:23:304]: PROPERTY CHANGE: Adding ACTION property. Its value is 'INSTALL'. MSI (s) (B8:FC) [09:22:23:304]: Doing action: INSTALL MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2205 2: 3: ActionText Action start 9:22:23: INSTALL. MSI (s) (B8:FC) [09:22:23:320]: Running ExecuteSequence MSI (s) (B8:FC) [09:22:23:320]: Doing action: SetAllUsers MSI (s) (B8:FC) [09:22:23:320]: Note: 1: 2205 2: 3: ActionText MSI (s) (B8:EC) [09:22:23:320]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI5F93.tmp, Entrypoint: SetAllUsersCA MSI (s) (B8:F8) [09:22:23:320]: Generating random cookie. MSI (s) (B8:F8) [09:22:23:320]: Created Custom Action Server with PID 976 (0x3D0). MSI (s) (B8:3C) [09:22:23:335]: Running as a service. MSI (s) (B8:3C) [09:22:23:335]: Hello, I'm your 64bit Impersonated custom action server. Action start 9:22:23: SetAllUsers. SetAllUsers: Debug: Num of subkeys found: 1. SetAllUsers: Info: Previously installed Splunk product is not found. SetAllUsers: Error: Failed SetAllUsers: 0x2. SetAllUsers: Info: Leave SetAllUsers: 0x80004005. CustomAction SetAllUsers returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) Action ended 9:22:23: SetAllUsers. Return value 3. Action ended 9:22:23: INSTALL. Return value 3.
I have a file.csv and I want to filter on an action, action="blocked", but there appear to be no results after searching. Is there any way you can help me?
On August 16, 2022 Splunk published two security advisories. One (SVD-2022-0803) was published under Quarterly Security Patch Updates on the Splunk Product Security page. The other (SVD-2022-0804) was published under Third-Party Bulletins on the Splunk Product Security page. Neither of these advisories was published under Critical Security Alerts on the Splunk Product Security page. Can you explain the process/criteria Splunk uses to determine when security advisories are published under Critical Security Alerts?
Hey there, we have a large volume (about 500-600gb) of data coming in daily but about 200gb of this is a JSON wrapper from Amazon Firehose. The data essentially looks like this:   { "message": "ACTUAL_DATA_WE_WANT", "logGroup": "/use1/prod/eks/primary/containers", "logStream": "fluent-bit/cross-services/settings-7dbb9dbdb4-qjz5b/settings-api/81d3685eaaeae0effab5931590784016ce75a8171ad7e3e76152e30bd732a739", "timestamp": 1675349068034 }   As you can see, ACTUAL_DATA_WE_WANT is what we need. This contains everything including timestamp and application information. The JSON wrapper is added by Firehose and makes up at least 250 bytes of every event. Is it possible to remove all of this unnecessary data so that we can save ingestion for more useful things? I have heard that the SEDCMD can do this but it is resource intensive and we ingest almost a billion events a day.
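For reference, a props.conf sketch of the SEDCMD approach, applied at parse time on the heavy forwarder or indexer. The sourcetype name here is an assumption, and any escaped quotes inside message would still come through escaped; Ingest Actions offers a UI-driven alternative for the same kind of regex-based trimming:

```
[aws:firehose:json]
# Strip the leading {"message": " portion of the Firehose wrapper
SEDCMD-strip_wrapper_head = s/^\{"message":\s*"//
# Strip the trailing ", "logGroup": ... } portion
SEDCMD-strip_wrapper_tail = s/",\s*"logGroup":.*\}$//
```

If the timestamp should come from the wrapper rather than the inner event, extract it before stripping, since SEDCMD runs before index time but after line breaking.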
Hello everyone,

I have a search in the following format: (index="index1" group=a) OR (index="index2" group=a)....

Later on in the search I want to rename the field host to splunkname, but only for events coming from the second "search". The problem is that both "searches" return events with a field called host. When I tried this, it didn't work:

(index="index1" group=a) OR (index="index2" group=a | rename host AS splunkname)

How could I solve this?
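One approach (a sketch): rename cannot be applied inside one branch of an OR, but an eval after the base search can derive splunkname conditionally from the index field:

```
(index="index1" group=a) OR (index="index2" group=a)
| eval splunkname=if(index="index2", host, null())
```

Events from index2 then carry both host and splunkname; if host should disappear for those events, a second conditional eval setting host to null() for index2 would do it.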
I am a new user of Splunk Enterprise, and my organization has a company-wide subscription to the Basic eLearning package with 22 courses, quizzes, and labs. If I complete all of these correctly, will I theoretically have the necessary knowledge to pass the entry-level basic user exam?
Hi, is it possible to use the app Paessler PRTG Modular Input in a distributed indexer scenario? I can install the app on the SH, but how do I create the reference to the indexer cluster? I can only select the local index on the SH. Would that work with a heavy forwarder, maybe?

Thanks,
max