All Topics

Hi Experts, is there any way to have different colors on a bar chart based on X-axis values? Below is the code for my bar chart; I want to have different color bars for different countries (the X-axis values).

<search base="base_search">
  <query>| search Country=$Country$ | stats dc(conversation-id) by Country</query>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisY.abbreviation">auto</option>
<option name="charting.axisY.minimumNumber">1</option>
<option name="charting.axisY.scale">log</option>
<option name="charting.axisY2.abbreviation">auto</option>
<option name="charting.axisY2.enabled">1</option>
<option name="charting.axisY2.scale">log</option>
<option name="charting.chart">column</option>
<option name="charting.chart.overlayFields">count</option>
<option name="charting.chart.showDataLabels">all</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.drilldown">none</option>
<option name="charting.fieldColors">{"CD": 0xFF0000, "IND": 0xFF9900, "ZA": 0x008000}</option>
<option name="charting.legend.placement">right</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.size">large</option>
<option name="trellis.splitBy">Country</option>
</chart>
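One common workaround (a sketch, not tested against this dashboard) is to turn each country into its own series, since charting.fieldColors colors series rather than X-axis values. The {Country} eval creates one field per country value; those field names must then match the keys in charting.fieldColors:

```
| search Country=$Country$
| stats dc(conversation-id) as count by Country
| eval {Country}=count
| fields - count
```

With each country as its own series, an option such as <option name="charting.fieldColors">{"CD": 0xFF0000, "IND": 0xFF9900, "ZA": 0x008000}</option> should color the columns individually.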
Is there a way to one-click reset all inputs back to their default values in dashboard studio? I have 7 different inputs (dropdowns and text) that are being used as filter criteria for a table. I would like a way to click "something" and have them all set back to their respective default values. I have done something similar for another dashboard that resets tokens that have been set based on clicked rows in charts/tables (just using a single value panel and setting all of the tokens), but I don't see a way to do this for inputs. Reloading the dashboard doesn't set them back to default either. It requires exiting the dashboard and relaunching. Thanks Craig
Hello from the Splunk Data Manager Team. We are excited to announce the preview of the new Kubernetes Navigator for Splunk Observability Cloud. Before you search through previous conversations looking for assistance, we want to provide you with some basic information and quick resources.

Want to access product docs? The Infrastructure Monitoring User Manual offers detailed guidance on the interfaces provided by the new Kubernetes Navigator.

Want to request more features? Add your ideas and vote on other ideas in the Infrastructure Monitoring category via the Splunk Ideas Portal.

Please reply to this thread with any questions or to get extra help!
Numeral system macros for Splunk. Examples of Single Value panel and Table. Hello, just an announcement: I have created macros that convert a number into a string with a language-specific expression (long and short scales, or neither). It was released on Splunkbase: https://splunkbase.splunk.com/app/6595 Language-specific expressions may be useful when displaying huge numbers on a dashboard to make their size easier to understand. Or they may help us mutually understand how numbers are expressed in other languages. Ref.: About long and short scales (numeration systems): https://en.wikipedia.org/wiki/Long_and_short_scales

Example of use (sample for English speakers):

| makeresults
| eval val=1234567890123, val=`numeral_en(val)`
| table val

1 trillion 234 billion 567 million 890 thousand 123

Provided macros:
numeral_en(1) : Short scale for English speakers
numeral_metric_prefix(1) : Metric prefix. kilo, mega, giga, tera, peta, exa, zetta, yotta
numeral_metric_symbol(1) : Metric symbol. K, M, G, T, P, E, Z, Y
numeral_jp(1) : 万進法 for Japanese speakers. 千, 万, 億, 兆
numeral_kr(1) : For Korean speakers. 千, 萬, 億, 兆
numeral_cn_t(1) : Chinese with Traditional Chinese characters. 千, 萬, 億, 兆
numeral_cn(1) : Chinese with Simplified Chinese characters. 千, 万, 亿, 兆
numeral_in_en(1) : For Indian / South Asian English. thousand, lakh, crore, lakh crore
numeral_in_en2(1) : For Indian / South Asian English. thousand, lakh, crore, arab
numeral_nl(1) : Long scale for Dutch. duizend, miljoen, miljard, biljoen
numeral_fr(1) : Long scale for French. mille, million, milliard, billion
numeral_es(1) : Long scale for Spanish speakers. mil, millón, millardo, billón
numeral_pt(1) : Long scale for Portuguese speakers. mil, milhão, bilhão, trilhão

The following are also provided since v1.1.1:
numeral_binary_symbol(1) : Binary symbol. KiB, MiB, GiB, TiB, PiB, EiB, ZiB, YiB, RiB, QiB
numeral_binary_symbol(2) : Binary symbol with an argument for rounding digits.
See the next article, "How to convert large bytes to human readable units (e.g. KiB, MiB, GiB)". For more details, see the Details tab on https://splunkbase.splunk.com/app/6595 Install this add-on into your search heads.

Advanced examples. Sample usage for all provided macros, rounding the lowest 3 digits if over 6 digits:

| makeresults count=35
| streamstats count as digit
| eval val=pow(10,digit-1), val=val+random()%val.".".printf("%02d",random()%100)
| foreach metric_prefix metric_symbol binary_symbol en es pt in_en in_en2 jp kr cn_t cn nl fr [eval <<FIELD>>=val]
| table digit val metric_prefix metric_symbol binary_symbol en es pt in_en in_en2 jp kr cn_t cn nl fr
| fieldformat val=tostring(val,"commas")
| fieldformat metric_prefix=`numeral_metric_prefix(if(log(metric_prefix,10)>6,round(metric_prefix,-3),metric_prefix))`
| fieldformat metric_symbol=`numeral_metric_symbol(if(log(metric_symbol,10)>6,round(metric_symbol,-3),metric_symbol))`
| fieldformat binary_symbol=printf("% 10s",`numeral_binary_symbol(binary_symbol,2)`)
| fieldformat en=`numeral_en(if(log(en,10)>6,round(en,-3),en))`
| fieldformat es=`numeral_es(if(log(es,10)>6,round(es,-3),es))`
| fieldformat pt=`numeral_pt(if(log(pt,10)>6,round(pt,-3),pt))`
| fieldformat in_en=`numeral_in_en(if(log(in_en,10)>6,round(in_en,-3),in_en))`
| fieldformat in_en2=`numeral_in_en2(if(log(in_en2,10)>6,round(in_en2,-3),in_en2))`
| fieldformat jp=`numeral_jp(if(log(jp,10)>6,round(jp,-3),jp))`
| fieldformat kr=`numeral_kr(if(log(kr,10)>6,round(kr,-3),kr))`
| fieldformat cn_t=`numeral_cn_t(if(log(cn_t,10)>6,round(cn_t,-3),cn_t))`
| fieldformat cn=`numeral_cn(if(log(cn,10)>6,round(cn,-3),cn))`
| fieldformat nl=`numeral_nl(if(log(nl,10)>6,round(nl,-3),nl))`
| fieldformat fr=`numeral_fr(if(log(fr,10)>6,round(fr,-3),fr))`

The results of this search will look like the table in the top image of this article.
Hello, I have installed the SCOM app (version 430) on my Splunk heavy forwarder (903). The Windows SCOM infrastructure consists of one management group with 2 management servers: management group = SC-PROD, consisting of 2 management servers, SC-PRD1 and SC-PRD2. The reason for this is that when server 1 is down, data is still collected through the second node. I have configured the management servers in the Splunk SCOM app. I only have the option to connect to the management servers, not to a group. I connect with a URL, with a service account that exists on the server. My problem is that I get duplicate data, from both management servers. I am looking for a smart way to connect, like a cluster. Does anybody have experience and advice for this scenario? Any advice is appreciated. Regards, Harry
Hello,

| index=fruits
| transaction fruit_id
| rex max_match=0 "using rex to get the Type"
| eval TypeList=mvdedup(Type)
| eval Typecount=mvcount(TypeList)
| table fruit_id TypeList Typecount Type

Current output:

Fruit_id | TypeList | Typecount | Type
1 | Apple Banana Orange | 3 | Apple Banana Orange Banana Orange Apple Orange Apple

Expected output:

Fruit_id | TypeList | Typecount | Type
1 | Apple Banana Orange | 3 | Apple - 3, Banana - 2, Orange - 3

I couldn't find the count of the individual values in the multi-value field. Can someone help me? Thanks in advance.
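A common approach (a sketch based on the fields in the question; the rex pattern is a placeholder) is to mvexpand the Type field, count per value, and then reassemble the row:

```
index=fruits
| transaction fruit_id
| rex max_match=0 "<your Type extraction>"
| mvexpand Type
| stats count by fruit_id, Type
| eval TypePair=Type." - ".count
| stats values(Type) as TypeList, list(TypePair) as Type by fruit_id
| eval Typecount=mvcount(TypeList)
| table fruit_id TypeList Typecount Type
```

mvexpand turns each multi-value event into one event per Type value, so the first stats produces the per-value counts that mvcount alone cannot give you.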
Installing the forwarder manually works fine; installing it automatically with the same user account fails with a 1603 error. Installer log snippet:

MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2203 2: C:\Windows\Installer\inprogressinstallinfo.ipi 3: -2147287038
MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2205 2: 3: LaunchCondition
MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2228 2: 3: LaunchCondition 4: SELECT `Condition` FROM `LaunchCondition`
MSI (s) (B8:FC) [09:22:23:304]: APPCOMPAT: [DetectVersionLaunchCondition] Failed to initialize pRecErr.
MSI (s) (B8:FC) [09:22:23:304]: PROPERTY CHANGE: Adding ACTION property. Its value is 'INSTALL'.
MSI (s) (B8:FC) [09:22:23:304]: Doing action: INSTALL
MSI (s) (B8:FC) [09:22:23:304]: Note: 1: 2205 2: 3: ActionText
Action start 9:22:23: INSTALL.
MSI (s) (B8:FC) [09:22:23:320]: Running ExecuteSequence
MSI (s) (B8:FC) [09:22:23:320]: Doing action: SetAllUsers
MSI (s) (B8:FC) [09:22:23:320]: Note: 1: 2205 2: 3: ActionText
MSI (s) (B8:EC) [09:22:23:320]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI5F93.tmp, Entrypoint: SetAllUsersCA
MSI (s) (B8:F8) [09:22:23:320]: Generating random cookie.
MSI (s) (B8:F8) [09:22:23:320]: Created Custom Action Server with PID 976 (0x3D0).
MSI (s) (B8:3C) [09:22:23:335]: Running as a service.
MSI (s) (B8:3C) [09:22:23:335]: Hello, I'm your 64bit Impersonated custom action server.
Action start 9:22:23: SetAllUsers.
SetAllUsers: Debug: Num of subkeys found: 1.
SetAllUsers: Info: Previously installed Splunk product is not found.
SetAllUsers: Error: Failed SetAllUsers: 0x2.
SetAllUsers: Info: Leave SetAllUsers: 0x80004005.
CustomAction SetAllUsers returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
Action ended 9:22:23: SetAllUsers. Return value 3.
Action ended 9:22:23: INSTALL. Return value 3.
I have a file.csv and I want to filter on an action, action="blocked", but it appears there are no results after searching. Is there any way to help me?
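If file.csv is a lookup file uploaded to Splunk, one way to check it (a sketch, assuming the file and field names from the question) is:

```
| inputlookup file.csv
| search action="blocked"
```

If this returns rows but the original search does not, the field may be named or cased differently in the CSV header (e.g. Action vs action), or the values may contain extra whitespace; `| inputlookup file.csv | stats values(action)` can show what values are actually present.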
On August 16, 2022, Splunk published two security advisories. One (SVD-2022-0803) was published under Quarterly Security Patch Updates on the Splunk Product Security page. The other (SVD-2022-0804) was published under Third-Party Bulletins on the Splunk Product Security page. Neither of these advisories was published under Critical Security Alerts on the Splunk Product Security page. Can you explain the process/criteria Splunk uses to determine when security advisories are published under Critical Security Alerts?
Hey there, we have a large volume (about 500-600gb) of data coming in daily but about 200gb of this is a JSON wrapper from Amazon Firehose. The data essentially looks like this:   { "message": "ACTUAL_DATA_WE_WANT", "logGroup": "/use1/prod/eks/primary/containers", "logStream": "fluent-bit/cross-services/settings-7dbb9dbdb4-qjz5b/settings-api/81d3685eaaeae0effab5931590784016ce75a8171ad7e3e76152e30bd732a739", "timestamp": 1675349068034 }   As you can see, ACTUAL_DATA_WE_WANT is what we need. This contains everything including timestamp and application information. The JSON wrapper is added by Firehose and makes up at least 250 bytes of every event. Is it possible to remove all of this unnecessary data so that we can save ingestion for more useful things? I have heard that the SEDCMD can do this but it is resource intensive and we ingest almost a billion events a day.
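A SEDCMD applied at parse time can strip the wrapper before the event is written to disk, so the wrapper bytes are not indexed (a sketch, assuming a hypothetical sourcetype name and that the wrapper fields always appear in this exact order):

```
# props.conf on the heavy forwarder / indexer tier (sourcetype name is made up)
[aws:firehose:wrapped]
# Keep only the value of "message"; drop logGroup/logStream/timestamp
SEDCMD-unwrap = s/^\{"message":\s*"(.*)",\s*"logGroup".*$/\1/
```

Caveats: escaped quotes inside the message value would remain escaped after the substitution, and running a regex over every event does add parsing cost at this volume. INGEST_EVAL with the JSON eval functions is an alternative on newer Splunk versions; either way, testing against a sample sourcetype first is advisable.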
Hello everyone, I have a search in the following format: (index="index1" group=a) OR (index="index2" group=a).... Later on in the search I want to rename the field host to splunkname, but only for events coming from the second "search". The problem is that both "searches" return events with a field called host. When I tried this, it didn't work: (index="index1" group=a) OR (index="index2" group=a | rename host AS splunkname). How could I solve this?
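Since both branches share the host field, one way (a sketch using the index names from the question) is to derive the new field conditionally from the index each event came from, instead of renaming inside the OR:

```
(index="index1" group=a) OR (index="index2" group=a)
| eval splunkname=if(index=="index2", host, null())
```

The index default field is available on every event, so splunkname is only populated for events from index2, while host stays intact on both sets of events.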
Hi, I have a question: is it possible to use the Paessler PRTG Modular Input app in a distributed indexer scenario? I can install the app on the SH, but how do I create the reference to the indexer cluster? I can only select the local index on the SH. Would that work with a heavy forwarder, maybe? Thanks, max
Utilizing Splunk Cloud, I have created quite a few notable events and correlation searches that function normally, but I can't figure out why several alarms show as Informational under Incident Review when they are intended to be Medium priority. The query does not currently contain anything to change the urgency.
The code below creates a table / scatter plot showing the warnings per hour of day. All warnings between 00:00 and 00:59 are counted/listed in date_hour 0, warnings from 01:00 until 01:59 are counted in date_hour 1, and so on. If the time range spans multiple days I still have the date_hours 0...23, and all the warnings are added independent of the date.

index="..." sourcetype="..."
| strcat opc "_" frame_num "_" elem_id uniqueID
| search status_1="DRW 1*"
| stats count as warning by date_hour, uniqueID
| table uniqueID date_hour warning

For the kind of evaluation we're doing, we would need shorter counting intervals than the one given by date_hour, e.g. 20 minutes. The question is: how do I update the code to get counting intervals smaller than one hour? Below I managed to reduce the time interval to 20 minutes:

index="..." sourcetype="..."
| strcat opc "_" frame_num "_" elem_id uniqueID
| search status_1="DRW 1*"
| bin _time span=20m as interval
| stats count as warning by interval, uniqueID
| table uniqueID interval warning

But: the date is still in there, so I get a 20-minute counting interval one day after the other. And the interval value is no longer human readable in the table. Any help is appreciated.
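One way to get a date-independent 20-minute slot (a sketch following the fields in the question) is to compute the minute-of-day from _time and bucket that, instead of binning _time itself:

```
index="..." sourcetype="..."
| strcat opc "_" frame_num "_" elem_id uniqueID
| search status_1="DRW 1*"
| eval minute_of_day = tonumber(strftime(_time, "%H")) * 60 + tonumber(strftime(_time, "%M"))
| eval slot = floor(minute_of_day / 20) * 20
| eval interval = printf("%02d:%02d", floor(slot / 60), slot % 60)
| stats count as warning by interval, uniqueID
| table uniqueID interval warning
```

This folds all days onto the same 00:00-23:40 slots (like date_hour does for hours) and renders each interval as a readable HH:MM label.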
Hi, I need to include the EUM_SYNTHETIC_SESSION_DETAILS URL in the alert. What is the variable name for this value? Actually, the EUM_SYNTHETIC_SESSION_DETAILS URL gives the latest Synthetic Job session details.
Hello! I have a dashboard scheduled to be exported to PDF on the first day of every month at 1am. The problem is that, although the queries and values presented on the dashboard are correct, most of the time the resulting PDF has values set to 0. For example, let's say I have 4 single value panels (A, B, C and D) on my dashboard and that those 4 panels have the following values: A -> 10, B -> 45, C -> 94, D -> 2. When I export it to PDF, some of those panels appear with value 0 in the PDF. Sometimes one of them, sometimes two of them, sometimes all of them... and sometimes all of them appear with the correct values. It's random. The same happens with charts that are also on the dashboard; they will randomly appear in the exported PDF with "No results found.". This is less frequent than the single value panels situation, but I assume the source of the problem is the same. Any idea what the problem might be? Thank you!
Hello, I want an EVAL statement which manually adds a specified time, maybe "00:00:00", for events containing only a date component. Example of the file (psv format):

Poojitha Vasanth|21644|669194|Poojitha Vasanth|02/19/18|PRE-CLINIC VISIT|

Current sourcetype:

[sample:xx:audit:psv]
EVAL-event_dt_tm = date
FIELD_NAMES = "prsnl_name","prsnl_alias","person_alias","person_name","date","event_name"
TIMESTAMP_FIELDS = "date"

And I have modified it to:

EVAL-time = "00:00:00"
EVAL-event_dt_tm = date.time
FIELD_NAMES = "prsnl_name","prsnl_alias","person_alias","person_name","date","event_name"
TIMESTAMP_FIELDS = "date","time"

Even after this change, I am getting the ingestion date and time instead of the actual log time. Could anyone please let me know where I have gone wrong?
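The likely issue is that EVAL- settings in props.conf run at search time, after the event has already been timestamped at index time, so they cannot feed TIMESTAMP_FIELDS. A sketch of an index-time approach (using the sourcetype from the question; this must live on the parsing tier, i.e. indexer or heavy forwarder):

```
[sample:xx:audit:psv]
FIELD_NAMES = "prsnl_name","prsnl_alias","person_alias","person_name","date","event_name"
TIMESTAMP_FIELDS = date
TIME_FORMAT = %m/%d/%y
```

With a date-only TIME_FORMAT, the missing time component defaults to midnight, which is effectively the "00:00:00" you were trying to append.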
I've tried to configure some reports to be sent via email. I created a report which runs on a schedule and then sends the report via mail. I receive an error like this:

ERROR ScriptRunner [26364 AlertNotifierWorker-3] - stderr from 'C:\Program Files\[..]\sendemail.py "results_link=https://sea:443/app/search/@go?sid=scheduler__myuser__search__RMD5ca1c47b4433f8dbe_at_1675331100_258_5A6150E4-7C97-409F-AC0E-5BC487885B82" "ssname=xxx: myreport" "graceful=True" "trigger_time=1675331129" results_file="C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__myuser__search__RMD5ca1c47b4433f8dbe_at_1675331100_258_5A6150E4-7C97-409F-AC0E-5BC487885B82\results.csv.gz" "is_stream_malert=False"': ERROR:root:[WinError 10061] Es konnte keine Verbindung hergestellt werden, da der Zielcomputer die Verbindung verweigerte (no connection could be made because the target machine refused it) while sending mail to: mail@mail.org

I thought it might be the SMTP gateway, but sending mails via the | sendemail command works fine. Also some, but very few, reports go through. I also checked whether sendemail.py tried to open a connection to the SMTP server, but it seems the error comes from sendemail.py trying to open the URL in results_link. And that is the point where I do not know what to look for anymore.
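Since | sendemail works but scheduled report mails fail around results_link, one thing to check (a sketch; the host values are placeholders) is the link-building and mail configuration in alert_actions.conf, where the hostname setting controls the URL that scheduled-alert emails embed:

```
# alert_actions.conf
[email]
mailserver = smtp.example.org:25
hostname = https://sea:443
```

If hostname points at a host/port that the search head itself cannot reach (e.g. a load-balancer address or a firewalled port), a WinError 10061 "connection refused" can come from that fetch rather than from SMTP, which would match | sendemail succeeding while scheduled reports fail.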
I have the following search which returns a table of all hostnames and operating systems:

| inputlookup hosts.csv
| search OS="*server*"
| table hostname, OS

I would like to add a checkbox to exclude Windows Server 2008 builds. This is what I have so far:

<row>
  <panel>
    <input type="checkbox" token="checkbox" searchWhenChanged="true">
      <label></label>
      <choice value="Windows Server 2008*">Exclude Server 2008</choice>
      <change>
        <condition match="$checkbox$==&quot;Enabled&quot;">
          <set token="setToken">1</set>
        </condition>
        <condition>
          <unset token="setToken"></unset>
        </condition>
      </change>
    </input>
  </panel>
</row>

New panel to show server builds depending on the checkbox:

<query>
| inputlookup hosts.csv
| search OS="*server*" AND OS!="$checkbox$"
| stats count as total
</query>

This only works when the checkbox is selected and correctly excludes the 2008 builds from the search, but doesn't display anything when the checkbox is unselected. I would like to display all devices when the checkbox is unselected.
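One pattern that handles the unselected case (a sketch; the token name os_filter is made up) is to have the change handler always set a search fragment, falling back to an always-true clause when the box is unchecked:

```xml
<input type="checkbox" token="checkbox" searchWhenChanged="true">
  <label></label>
  <choice value="Windows Server 2008*">Exclude Server 2008</choice>
  <change>
    <condition value="Windows Server 2008*">
      <set token="os_filter">OS!="Windows Server 2008*"</set>
    </condition>
    <condition>
      <set token="os_filter">OS=*</set>
    </condition>
  </change>
</input>
```

The panel search then uses the fragment directly, so it stays valid whether or not the box is checked:

```
| inputlookup hosts.csv
| search OS="*server*" $os_filter$
| stats count as total
```

An <init> section at the form level that sets os_filter to OS=* would cover the first load, before any change event has fired.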
I have 2 indexes: abc (FieldA, E, F) and bcz (FieldB, C, D). I want to return D, C and F where the value of field E matches B. I am getting the required output, but I am not able to get the F values. This is my query:

index=abc fieldA="<>"
| rex field=_raw .......FieldB .... FieldC.. Field D
| search [ index=bcz FieldF="<>" | rename FieldE as FieldB | fields FieldB ]
| stats count as Total by _time, FieldD, FieldC, FieldF
| where FieldD="<>"
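A subsearch used as a filter only returns FieldB values to the outer search, so FieldF (which exists only on the other index's events) can never reach the stats. One way around this (a sketch; the index and field names follow the question, and the rex pattern is a placeholder) is to pull both indexes into a single search and stitch the events together on the shared value:

```
(index=abc fieldA="<>") OR (index=bcz FieldF="<>")
| rex field=_raw "<your extraction for FieldB, FieldC, FieldD>"
| eval joinkey=coalesce(FieldB, FieldE)
| stats values(FieldC) as FieldC, values(FieldD) as FieldD, values(FieldF) as FieldF by joinkey
| where FieldD="<>"
```

The coalesce gives both sides a common key, and the stats collapses the matching events from the two indexes into one row that carries C, D and F together.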