All Posts

How should I format the lookup definition so that it handles both a CIDR match and an individual IP match? What I mean is: if I go to advanced settings and change the match criteria to CIDR(IP), it's not matching the entries where the IP is a single IP and not a CIDR.
Hi, I have two separate searches that are working independently (expected count, actual count). I want to combine the searches to get a percentage of actual count to expected count; however, append, appendcols, and other ways to add the searches together have so far not worked for me. Curious if there's a better way to use the stats, eval, or transaction commands to achieve the combination of these searches. The end goal is to provide a visualization to understand if there's an issue when the actual count does not match the expected count, so I'm open to suggestions on better ways to achieve that goal.

Search 1 (counting all records that are sent through the producer class, not part of the refresh process):

index=index
| search ("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*"
| stats count as actual_count

Search 2 (sum of record counts on files processed through the opportunity class):

index=index
| search "OpportunityClass" AND "Processing file: file_name"
| rex field=_raw "Processing file: file_name with (?<record_count>[^\s]+) records"
| stats sum(record_count) as expected_count

I have tried append like this and it has not worked:

index=index
| search ("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*"
| stats count as actual_count
| append [ search index=index "OpportunityClass" AND "Processing file: "
    | rex field=_raw "Processing file: file_name with (?<record_count>[^\s]+) records"
    | stats sum(record_count) as expected_count]
| eval percent = expected_count/actual_count * 100

appendcols similarly did not work ("Aborting Long Running Search"). I assume I am misunderstanding how these searches combine, and that is causing issues when using append-type commands. Using an OR on the searches works, but I'm unsure how to use other commands to group the results properly after:

index=index
| search (("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*") OR ("OpportunityClass" AND "Processing file: ")
| ...
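One common pattern, sketched here under the assumption that both event sets live in the same index as shown above, is to run the OR search once and count each stream conditionally with eval, avoiding append entirely (the filters and field names are copied from the searches above):

index=index (("ProducerClass" "*Sending message:*") NOT "*REFRESH*") OR ("OpportunityClass" "Processing file: ")
| rex field=_raw "Processing file: file_name with (?<record_count>[^\s]+) records"
| eval is_actual=if(match(_raw, "ProducerClass") AND match(_raw, "Sending message:"), 1, 0)
| stats sum(is_actual) as actual_count, sum(record_count) as expected_count
| eval percent=round(actual_count/expected_count*100, 2)

Since everything lands in one result row, the final eval can divide directly; no append or join is needed. Flip the division if you want the ratio the other way around, as in your append attempt.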
Unless the lookup is defined with the CIDR, WILDCARD, or case-insensitive option, it will do exact string matching. Even with those options, there is no way to get "1.2.3.4" to match "['1.2.3.4', '2.3.5.0/24']". The multiple IP addresses in each line should be put on separate lines:

IP          Name
1.2.3.4     name1
2.3.5.0/24  name1
1.2.3.4     name2
6.7.8.9/31  name2
4.5.6.7     name2
1.1.1.1     name2
3.3.3.3/31  name3
4.4.4.4     name3

Note that the IP field is ambiguous because both name1 and name2 share an IP address. The lookup command will return only the first matching IP.
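For reference, a minimal sketch of the lookup definition that goes with the reshaped table, assuming a file-based lookup; the stanza and file names are placeholders. As far as I know, a mask-less IP in a CIDR-typed field is treated as a /32, which is what lets single IPs and CIDR ranges coexist in one column:

# transforms.conf (stanza and filename are placeholders)
[ip_to_name]
filename = ip_to_name.csv
# Treat the IP column as CIDR; a bare IP behaves like IP/32
match_type = CIDR(IP)
# Return only the first matching row
max_matches = 1

Then, in SPL: | lookup ip_to_name IP as src_ip OUTPUT Name, where src_ip is whatever your event field is called.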
1. Check out the Workload Management feature: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/Admin/WorkloadManagement

2. That's about as much art as it is science. The Search Manual has a chapter on it that should get you started: https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutoptimization
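On point 1, another lever besides Workload Management is role-based search quotas in authorize.conf; a minimal sketch, with a placeholder role name and example limits you would tune for your environment:

# authorize.conf (role name is a placeholder; limits are examples)
[role_power]
# Max concurrent historical searches per user in this role
srchJobsQuota = 3
# Max concurrent real-time searches
rtSrchJobsQuota = 1
# Disk space (MB) that a user's search jobs may use
srchDiskQuota = 500
# Cap on a single search's runtime
srchMaxTime = 1h

In Splunk Cloud these map to role settings in the UI rather than direct .conf edits.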
Could your timeframes be different?
No, I'm not confusing the two. I'm well aware of the differences. My scenario is that I have about a hundred devices, all sending syslog data between two receivers in two sites currently, which is then picked up by two UFs and forwarded to Splunk Cloud. When those two sites get rolled up into a single colo, I'll need to combine them (for lack of a better word). Hence the load balancing. HA would be fine, if I could determine whether both the UF and rsyslog can be operated in a high-availability setting. I'm pretty sure the former is possible; the latter, though, not from what I understand.
To get it to work, I had to put log_level="$tokLogLevel2$" in the eval.

<query>index=_internal sourcetype=splunkd log_level="$tokLogLevel1$" | eval log_level="$tokLogLevel1$" | timechart count by component | fields - "$tokSubmit1$"</query>

Also, if someone tries to access a dashboard link that somebody shared, the submit button won't work. In order to fix that, you have to unset $form.tokLinkSubmit1$ under each <change>.

<change>
  <!-- Any input changes reset the search to wait for the Submit button -->
  <unset token="tokSubmit1"></unset>
  <unset token="form.tokLinkSubmit1"></unset>
</change>
Hi all, we are seeing a scenario where there are a lot of unoptimised searches, dashboards, etc., which, when run, exhaust the CPU on our indexers. If some users run resource-intensive ad hoc searches/dashboards simultaneously, this becomes a problem: so many searches running together result in a 'server busy' error at the indexer.

1. Is there any way we can throttle CPU/memory usage per user/role/search?
2. Are there any documents on optimising searches for better performance and less resource usage?
@ITWhisperer thanks, works perfectly. Is there any way to show the resp count numbers like this: 10, 1K, 2M, …?
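For what it's worth, a rough sketch of one way to do that with eval, assuming the field is named resp_count (the thresholds and rounding are arbitrary choices). Note this suits table output; axis labels on a chart are controlled by chart options instead:

| eval resp_count=case(
      resp_count >= 1000000, round(resp_count/1000000, 1)."M",
      resp_count >= 1000, round(resp_count/1000, 1)."K",
      true(), tostring(resp_count))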
Hello everyone, recently I have come across a very strange issue while adding a panel query to a dashboard. The following query works fine in search, but as soon as I add it to a dashboard it says "No results found". Not sure why?

index=app_pl "com.thehartford.pl.model.exception.CscServiceException: null at com.thehartford.pl.rest.UserProfileController.buildUserProfile"
| bin span=1h _time
| table *
| stats count as Failure by _time
I have a lookup table with 2 fields, IP and Name:

IP -> Name
['1.2.3.4', '2.3.5.0/24'] -> name1
['1.2.3.4', '6.7.8.9/31', '4.5.6.7', '1.1.1.1'] -> name2
['3.3.3.3/31', '4.4.4.4'] -> name3

I have a list of IPs like "1.2.3.4, 2.3.5.1". This should give me the lookup table names where the IPs are present. So in the above example, since "1.2.3.4, 2.3.5.1" are present in the first 2 rows, the result would be:

name1
name2
Try this dashboard we use. Change host="your-license server" (2 places) to the name of your license server. If you have only one server, or you run the code on the license server, just remove host="your-license server" completely. It will show one red line that is your license limit, and a bar graph that shows GB of license use per day. To get MB, just remove one /1024.

<form version="1.1" theme="dark">
  <label>Used License</label>
  <fieldset submitButton="false">
    <input type="time">
      <label></label>
      <default>
        <earliest>-3mon</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_telemetry host="your-license server" source=*license_usage_summary.log* type="RolloverSummary"
| eval _time=_time - 86400
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| join type=outer _time
    [ search index=_telemetry host="your-license server" source=*license_usage_summary.log* type="RolloverSummary"
    | eval _time=_time - 86400
    | bin _time span=1d
    | dedup _time stack
    | stats sum(stacksz) AS "Limit" by _time]
| fields - _timediff
| foreach "*" [ eval &lt;&lt;FIELD&gt;&gt;=round('&lt;&lt;FIELD&gt;&gt;'/1024/1024/1024, 3)]</query>
        </search>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisY.minimumNumber">0</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.overlayFields">"Limit"</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.fieldDashStyles">{"Limit":"shortDash"}</option>
        <option name="charting.seriesColors">[0x006D9C,0xFF0000]</option>
        <option name="charting.lineWidth">4</option>
        <option name="height">800</option>
      </chart>
    </panel>
  </row>
</form>
I am trying to create a search that will generate a report showing host by event count in the last hour, and also the average 7-day hourly event count per host. So far I have the below search that shows host by event count over the last hour, but I am struggling to get a column added showing the weekly hourly average.

| tstats count where index=* by host, index, sourcetype
| addtotals
| sort -Total
| fields - Total
| rename count as events_latest_hour

Any help on how I get a column added showing the 7-day hourly average for event count?
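A rough sketch of one approach, assuming a 7-day window bucketed by hour (the by-clause drops index/sourcetype for brevity; add them back if you need the same split as above):

| tstats count where index=* earliest=-7d@h latest=@h by host _time span=1h
| eval latest_hour=if(_time >= relative_time(now(), "-1h@h"), count, null())
| stats avg(count) as avg_hourly_7d, sum(latest_hour) as events_latest_hour by host
| eval avg_hourly_7d=round(avg_hourly_7d, 0)

One caveat: hours in which a host logged nothing don't produce rows, so the average is over active hours only, which skews it high for bursty hosts.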
I have 3 panels driven by a dropdown menu.

If A is selected:
panel 1 shows Search A
panel 2 shows a title and the link to a URL
panel 3 shows another search of its own (only when the dropdown is set to A)

If B is selected:
panel 1 shows Search B
panel 2 disappears
panel 3 disappears

If C is selected:
panel 1 shows Search C
panel 2 disappears
panel 3 disappears

If D is selected:
panel 1 shows Search D
panel 2 disappears
panel 3 disappears

<input type="dropdown" token="tokenSearchOption1" searchWhenChanged="true">
  <label>Sources</label>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <choice value="C">C</choice>
  <choice value="D">D</choice>
  <change>
    <condition value="A">
      <set token="tokenSearchQuery">index= search query A</set>
    </condition>
    <condition value="B">
      <set token="tokenSearchQuery">index= search query B</set>
    </condition>
    <condition value="C">
      <set token="tokenSearchQuery">index=search query C</set>
    </condition>
    <condition value="D">
      <set token="tokenSearchQuery">index= search query D</set>
    </condition>
  </change>
  <initialValue>"A"</initialValue>
</input>
</panel>
</row>
<row>
  <panel id="URL test">
    <title>Title URL</title>
    <html>
      <!-- <style> .dashboard-row Title .dashboard-panel h2.panel-title { font-size: 40px !important; text-align:left; font-weight:bold; } </style>-->
      <center>
        <style>.btn-primary { margin: 5px 10px 5px 0;font-size: 40px !important; }</style>
        <a href="URL for a website" target="blank" class="btn btn-primary"> Click here </a>
      </center>
    </html>
  </panel>
</row>
<row>
  <panel depends=dropdown A>
    <title>Magic</title>
    <table>
      <search>
        <query>Index=Run this search when drop down A </query>
        <earliest>$tokTime.earliest$</earliest>
        <latest>$tokTime.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <option name="wrap">false</option>
    </table>
  </panel>
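For what it's worth, a minimal sketch of the usual token pattern for this; tokShowExtras is a placeholder name. Set it only when A is chosen, unset it for everything else, and hang panels 2 and 3 off it with depends:

<change>
  <condition value="A">
    <set token="tokenSearchQuery">index= search query A</set>
    <!-- Panels 2 and 3 are visible only while this token is set -->
    <set token="tokShowExtras">true</set>
  </condition>
  <condition value="B">
    <set token="tokenSearchQuery">index= search query B</set>
    <unset token="tokShowExtras"></unset>
  </condition>
  <!-- conditions for C and D mirror B -->
</change>

<!-- On each of panels 2 and 3 -->
<panel depends="$tokShowExtras$">
  <title>Title URL</title>
  <html>
    <a href="URL for a website" target="blank" class="btn btn-primary">Click here</a>
  </html>
</panel>

depends hides the panel (and skips its searches) whenever the token is unset, which covers the B/C/D cases.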
I have created 5 scheduled reports that each send a CSV of their search results to the same email address. Is there a way to have all 5 reports' results CSV files be in 1 email? I looked it up, and someone mentioned merging everything into 1 report by appending the search results of each report. But many of my reports have a lot going on already; I wasn't sure if that method would put a lot of strain on the load time for the results to come up, and it might also mess up how things are kept organized. Each report returns results for a different host, hence making it into 1 big CSV is not applicable. If we could keep it the way it is, with 5 different CSV-file reports, but just send 1 email with all 5 attachments, that would be ideal.
Do multiple joins cause slowness in a Splunk dashboard? Is there any other way to make it faster? How can we combine those joins?

index="xxx" applicationName="api" environment=$env$ timestamp correlationId trace message ("Ondemand Started*" OR "Expense Process started")
| rename sourceFileName as SourceFileName content.JobName as JobName
| eval "FileName/JobName"=coalesce(SourceFileName, JobName)
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as Tracepoint message as Message
| eval JobType=case(like('Message', "%Ondemand Started%"), "OnDemand", like('Message', "Expense Process started%"), "Scheduled", true(), "Unknown")
| eval Message=trim(Message, "\"")
| table Timestamp CorrelationId Tracepoint JobType "FileName/JobName" Message
| join CorrelationId type=left
    [ search index="xxx" applicationName="api" trace=ERROR
    | rename correlationId as CorrelationId trace as TracePoint message as StatusMessage
    | dedup CorrelationId
    | table CorrelationId TracePoint StatusMessage]
| table Timestamp CorrelationId TracePoint JobType "FileName/JobName" StatusMessage
| join CorrelationId type=left
    [ search index="xxx" applicationName="api" message="*Before Calling flow archive-Concur*"
    | rename correlationId as CorrelationId content.loggerPayload.archiveFileName as ArchivedFileName
    | table CorrelationId ArchivedFileName]
| table Timestamp CorrelationId Tracepoint JobType "FileName/JobName" ArchivedFileName StatusMessage
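A hedged sketch of the usual join-free rewrite: pull all three event sets in one base search and collapse them with stats by CorrelationId. The filters and field names are copied from the query above, so verify them against your data:

index="xxx" applicationName="api" (("Ondemand Started*" OR "Expense Process started") OR trace=ERROR OR message="*Before Calling flow archive-Concur*")
| rename correlationId as CorrelationId timestamp as Timestamp content.loggerPayload.archiveFileName as ArchivedFileName
| eval StatusMessage=if(trace=="ERROR", message, null())
| stats first(Timestamp) as Timestamp, values(tracePoint) as Tracepoint, values(ArchivedFileName) as ArchivedFileName, first(StatusMessage) as StatusMessage by CorrelationId

The JobType and FileName/JobName evals from the main search can run after the stats. The win is that stats makes one pass on the indexers, while each join re-runs a subsearch that is subject to its own time and row limits.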
Hello, 1.0.40 is working fine for me.
- Clear your browser cache, or use a different browser/incognito mode.
Note: the configuration changed. You need to set a new API key, even for the Overview dashboards.
Any update on this? I installed v 1.0.40 and still have the same problem with moment.js.
I've been intrigued by this idea. I was not able to find anything anywhere about the topic, but is it possible to use generative #AI to create Splunk #dashboards? It's a problem I have been wondering if gen AI can help with, as creating dashboards can have a very steep learning curve and be tricky to get started with. It's a great area where I feel AI would be super useful, and I'm sure the geniuses at Splunk are already looking into it!
I think you'll need an external command to do that.