All Posts

If you do not have a 'group' value per event, then you can simply create one by putting | streamstats c as group before the first stats mentioned by @yuanliu.
Thank you @bowesmana. I'm also considering implementing the select value in JavaScript. Appreciate your time.
Hello everyone, I have Splunk Enterprise installed on a CentOS 7 Linux OS. I have added CSV data and I wish to build a dashboard; however, when attempting to add a background image I am getting a 503 Service Unavailable error. When running the start command, this is what I get:

Splunk> Finding your faults, just like mom.

Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories... Done
	Checking indexes...
		Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history keepereu main summary
	Done
	Checking filesystem compatibility... Done
	Checking conf files for problems... Done
	Checking default conf files for edits...
	Validating installed files against hashes from '/opt/splunk/splunk-9.2.0.1-d8ae995bf219-linux-2.6-x86_64-manifest'
	All installed files intact.
	Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done
[ OK ]

Waiting for web server at http://127.0.0.1:8000 to be available.....................

When running the status command, I get:

[root@localhost splunk]# /opt/splunk/bin/splunk status
splunkd is running (PID: 2614).
splunk helpers are running (PIDs: 2640 3079 3140 10111 10113 10144 10155 10172 10692 10808 11367 21735 24183).
[root@localhost splunk]#

What logs should I inspect to better understand why this is happening?
This seems to be a bug but for me it started working on its own after like 12 hrs.
Can you please try appending the below?

| makemv delim="," allowempty=t CmdArgAV

Please accept the solution and hit Karma if this helps!
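For readers unfamiliar with makemv, here is a rough Python sketch of what the command does to a field value (the sample value is made up for illustration, not from the original question):

```python
# Rough emulation of: | makemv delim="," allowempty=t CmdArgAV
# The sample value below is hypothetical.
cmd_arg_av = "scan,,quarantine,report"

# allowempty=t means empty entries between consecutive delimiters are kept
# as empty multivalue entries rather than dropped.
values = cmd_arg_av.split(",")
print(values)  # ['scan', '', 'quarantine', 'report']
```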
As you have already experienced, Splunk strongly disfavors join. This is natural, as most NoSQL systems do. So, you explained how many events these sources can give, and how many different client IDs. What you forgot to tell us is what you mean by "to get the count/volume based on the client id". If you only want to count events from each sourcetype by clientid, all you need to do is

(index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph) ``` you can also use index=a sourcetype IN (Cos, Ma, Ph) ```
| stats count by clientid, sourcetype

(which is a copy of your SPL snippet, but with a pipe (|) added in front of stats to make the syntax correct.) There is no join. The above will not time out even with millions of events.

In other words, what does "compare" mean in "to compare the same in another application", and what does the word mean in "to compare whether these client ids are present in another application"? If you want to know which and how many sourcetypes (apps) each clientid appears in, all you need is to add the following:

| stats sum(count) as total values(sourcetype) as apps dc(sourcetype) as app_count by clientid

Put together,

(index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph) ``` you can also use index=a sourcetype IN (Cos, Ma, Ph) ```
| stats count by clientid, sourcetype
| stats sum(count) as total values(sourcetype) as apps dc(sourcetype) as app_count by clientid

Still no join. Where do you get a join to time out?
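To trace the two-pass stats logic outside Splunk, here is a minimal Python sketch; the events are mock (clientid, sourcetype) pairs, not data from the original question:

```python
from collections import Counter

# Mock events as (clientid, sourcetype) pairs standing in for the three logs
events = [
    ("c1", "Cos"), ("c1", "Ma"), ("c1", "Ph"),
    ("c2", "Cos"), ("c2", "Cos"),
    ("c3", "Ma"),
]

# First pass: | stats count by clientid, sourcetype
pair_counts = Counter(events)

# Second pass: | stats sum(count) as total values(sourcetype) as apps
#                      dc(sourcetype) as app_count by clientid
summary = {}
for (clientid, sourcetype), count in pair_counts.items():
    row = summary.setdefault(clientid, {"total": 0, "apps": set()})
    row["total"] += count          # sum(count) as total
    row["apps"].add(sourcetype)    # values()/dc() both derive from this set

for clientid in sorted(summary):
    row = summary[clientid]
    print(clientid, row["total"], sorted(row["apps"]), len(row["apps"]))
```

A clientid common to all three apps simply shows up with app_count equal to 3; no per-source joining is ever needed.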
I cannot get a sense of this question. What is the data table at the beginning of this post supposed to be? Right before inputlookup, you have a stats command that reduces the data fields to downtime and data.componentId. I assume that everything above inputlookup is working as expected. If this is the case, please just post sample/mock values of downtime and data.componentId and ignore anything about app and input selection. (See below.)

What fields (columns) are in this all_env_component.csv file? And how is this file useful to what you wanted in the end?

What exactly is it that you wanted in the end? By this, I mean: what do the "4 multi select boxes" have to do with this question? Your search does not use a single token. This means that none of these selections should have any effect on results.

In short, you need to post input data - you can post just sample/mock downtime - data.componentId pairs; explain what is in that lookup file, and provide some sample/mock values. Then, explain what you are trying to do after that inputlookup, illustrate what your expected results look like from the sample/mock input values, and the logic between the input and desired results. These are the basics of an answerable question in a forum about data analytics.
@meetmshah thanks, 'max_match=0' helped. But the command keywords are separated by 'Enter'. Are there any options to keep all words on one line?
Hi, I have a requirement as below; please could you review and suggest? I need to pick up all client IDs from an application log called "Cos" (index=a sourcetype=Cos), where the distinct client IDs number around 6 million. And I want to compare whether these client IDs are present in another application log called "Ma" (index=a sourcetype=Ma). And I also want to compare the same in another application called "Ph" (index=a sourcetype=Ph). Basically, I am trying to get the count/volume based on the client ID that is common among the 3 applications (Cos, Ma, Ph). The total events are in the millions, and when I use join, the search job is getting auto-cancelled or terminated.

(index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph) stats count by clientid, sourcetype

Thanks, Selvam.
The solution becomes more obvious if I restate the problem like this: In addition to colors, you must have another field with four distinct values. Let's call the additional field group, and give it the values "a", "b", "c", and "d".

colors	group
blue	a
blue	a
red	a
yellow	b
red	b
blue	c
red	c
blue	c
red	d
red	d
green	d
green	d

When the data structure is clear, what you are asking is to:

1. Find values of colors that appear more than once within each group value.
2. Count how many distinct values of group there are for each duplicated value of colors.

Hence,

| stats count by colors group
| where count > 1
| stats dc(group) as duplicate_count by colors

Here is a data emulation you can play with and compare with real data:

| makeresults format=csv data="colors,group
blue,a
blue,a
red,a
yellow,b
red,b
blue,c
red,c
blue,c
red,d
red,d
green,d
green,d"
``` data emulation above ```

Stringing the two together, you get

colors	duplicate_count
blue	2
green	1
red	1
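As a cross-check of the logic (not part of the original answer), the same two-stage aggregation can be emulated in Python using the mock rows above:

```python
from collections import Counter

# The colors/group rows from the data emulation above
rows = [
    ("blue", "a"), ("blue", "a"), ("red", "a"), ("yellow", "b"),
    ("red", "b"), ("blue", "c"), ("red", "c"), ("blue", "c"),
    ("red", "d"), ("red", "d"), ("green", "d"), ("green", "d"),
]

# | stats count by colors group
pair_counts = Counter(rows)

# | where count > 1 | stats dc(group) as duplicate_count by colors
# Each surviving (color, group) pair contributes one distinct group.
duplicate_count = Counter(
    color for (color, group), count in pair_counts.items() if count > 1
)
print(dict(duplicate_count))  # {'blue': 2, 'red': 1, 'green': 1}
```

This reproduces the final table: blue is duplicated in two groups, while red and green are duplicated in one group each (yellow never repeats, so it drops out).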
@gcusello @KendallW I receive the log via UDP from the heavy forwarder connected to the indexer. After setting the sourcetype to temp in the heavy forwarder (inputs), the sourcetype is overridden according to the host and a regular expression. Is it correct to extract timestamps in the heavy forwarder's props? No matter how many times I apply the settings you mentioned, it doesn't work.
When I do a splunkforwarder version upgrade to 9.X, it always fails due to the error below:

Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2024-03-25.18-09-26'
-- Error calling execve(): No such file or directory
Error launching command: No such file or directory

As per the discussion at https://community.splunk.com/t5/Installation/Upgrading-Universal-Forwarder-8-x-x-to-9-x-x-does-not-work/m-p/665668, we have to enable the tty option on the Docker runtime to successfully bring up splunkforwarder 9.X. Indeed, I added the tty config to my docker compose file and it works. But I would say it is a bad workaround for bringing up splunkforwarder 9.X. Why does the forwarder 9.X version force a tty terminal environment in order to run? Can we remove this restriction? In many cases, we have to bring up a splunkforwarder instance from a background program, not in a terminal, and in some cases we have to use a process manager to control splunkforwarder start/resume... Anyway, can we remove the tty restriction for the newer splunkforwarder 9.X, just like it was on 8.X and 7.X?
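For reference, the workaround described above looks roughly like this in a compose file; the service name and image tag here are illustrative, not taken from the original post:

```yaml
services:
  splunkforwarder:
    image: splunk/universalforwarder:9.2
    # Workaround discussed above: without a pseudo-terminal allocated,
    # the 9.x upgrade/migration fails with "Error calling execve()".
    # tty was not required on the 8.x and 7.x forwarders.
    tty: true
```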
Thank you for answer! I tried specifying and applying all the regular expressions as you answered, but it doesn't work. It's difficult...
Hello, Thank you for your answer. I already tried it but it doesn't work. I'll try it one more time!
Hi, I am using a Splunk dashboard with Simple XML formatting. This is the current code for my dashboard. (* The query is masked. * The structure is defined as it is.)

<row>
  <panel>
    <html depends="$alwaysHideCSS$">
      <style>
        #table_ref_base{ width:50% !important; float:left !important; height: 800px !important; }
        #table_ref_red{ width:50% !important; float:right !important; height: 400px !important; }
        #table_ref_org{ width:50% !important; float:right !important; height: 400px !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel id="table_ref_base">
    <table>
      <title>Signals from Week $tk_chosen_start_wk$ ~ Week $tk_chosen_end_wk$</title>
      <search id="search_ref_base">
        <query></query>
        <earliest>$tk_search_start_week$</earliest>
        <latest>$tk_search_end_week$</latest>
      </search>
      <option name="count">30</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">row</option>
      <option name="percentagesRow">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
  </panel>
  <panel id="table_ref_red">
    <table>
      <title> (Red) - Critical/Severe Detected (Division_HQ/PG2/Criteria/Value)</title>
      <search base="search_ref_base">
        <query></query>
      </search>
      <option name="count">5</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel id="table_ref_org">
    <table>
      <title>🟠 (Orange) - High/Warning Detected (Division_HQ/PG2/Criteria/Value)</title>
      <search base="search_ref_base">
        <query></query>
      </search>
      <option name="count">5</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>

However, my dashboard shows up as in the picture below. I thought that defining 800px on the left panel and 400px on both right panels would end up like the preferred dashboard page above (right), but it gave me the result on the left. Here is also the result of my current dashboard. As you can see, it also returns needless white space below.

Thanks for your help!

Sincerely,
Chung
Thank you. I have added double quotes around the FailureMsg field in my lookup. Could you please help with how we can write a lookup query to search for FailureMsg in _raw?
Hi. You should use " around your field value. Otherwise Splunk thinks that your value is a field name. r. Ismo
As already mentioned, all those numbers are from 0 to something, not 1 to something!
7 is a non-standard day number.  Try 0 6,12,20,22 * * 0,6
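If it helps to sanity-check the day numbering, here is a small Python sketch assuming the common cron convention of 0 = Sunday through 6 = Saturday (Python's weekday() uses 0 = Monday, hence the conversion):

```python
import datetime

def cron_dow(d: datetime.date) -> int:
    """Map a date to the cron day-of-week field value (0 = Sunday ... 6 = Saturday)."""
    # datetime.weekday(): Monday=0 ... Sunday=6; shift so Sunday becomes 0.
    return (d.weekday() + 1) % 7

print(cron_dow(datetime.date(2024, 3, 24)))  # 0 (a Sunday)
print(cron_dow(datetime.date(2024, 3, 30)))  # 6 (a Saturday)
```

So the schedule "0 6,12,20,22 * * 0,6" fires at minute 0 of hours 6, 12, 20, and 22 on Sundays and Saturdays.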
Another update: my CSV lookup in this example has only 2 rows, but it could have many more. Also, I am not planning to use the other fields Product and Feature; I just need FailureMsg.