
All Posts

Hello, I have a query that gathers all the data from API calls: P90/P95/P99 times, API response times grouped into time buckets (<1s, 1 to 3 seconds, up to >10s), and Avg and Peak TPS. No matter how much I try, I am unable to get these to report hourly over the course of the last 24 hours. I am also using multiple joins in the query.

index=X
| eval eTime = responsetime
| stats count(responsetime) as TotalCalls, p90(responsetime) as P90Time, p95(responsetime) as P95Time, p99(responsetime) as P99Time by fi
| eval P90Time=round(P90Time,2)
| eval P95Time=round(P95Time,2)
| eval P99Time=round(P99Time,2)
| table TotalCalls, P90Time, P95Time, P99Time
| join type=left uri
    [search index=X
    | eval pTime = responsetime
    | eval TimeFrames = case(pTime<=1, "0-1s%", pTime>1 AND pTime<=3, "1-3s%", pTime>3, ">3s%")
    | stats count as CallVolume by platform, TimeFrames
    | eventstats sum(CallVolume) as Total
    | eval Percentage=(CallVolume/Total)*100
    | eval Percentage=round(Percentage,2)
    | chart values(Percentage) over platform by TimeFrames
    | sort -TimeFrames]
| join type=left uri
    [search index=X
    | eval resptime = responsetime
    | bucket _time span=1s
    | stats count as TPS by _time, fi
    | stats max(TPS) as PeakTPS, avg(TPS) as AvgTPS by fi
    | eval AvgTPS=round(AvgTPS,2)
    | fields PeakTPS, AvgTPS]

My stats currently look like this:

TotalCalls  P90Time  P95Time  P99Time  0-1s%  1-3s%  AvgTPS  Platform  PeakTPS
1565113     0.35     0.44     1.283    98.09  1.91   434.75  abc       937

I just need these stats every hour over the course of the last X days. I am only able to get certain columns' worth of data; the chart in the first join and the fields in the second join are somehow messing it up.
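What I think is missing is an hourly time bucket. A rough, untested sketch of what I am after for the first stats (assuming _time is available in the events and fi stays as the grouping field) would be something like:

index=X
| bin _time span=1h
| stats count(responsetime) as TotalCalls, p90(responsetime) as P90Time, p95(responsetime) as P95Time, p99(responsetime) as P99Time by _time, fi

In other words, every stats/chart in the query would need the hourly _time in its by clause so the joins line up per hour, but I have not managed to make that work with the chart and fields commands in the subsearches.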
Are you saying you get raw events that are fragments of an HTML document?  In any case, even though HTML is not an ideal format for structured data, treating it as plain text still carries the usual risks, so I advise against it.  Use spath to pretend that it is XML. You didn't give enough of a snippet to show how Environment is actually coded and I don't want to speculate (read tea leaves), so I am going to use Vendor as the groupby in my example.  This is what I would do:

| spath
| eval Vendor = mvindex('tr.td', 0)
| eval Issues = tonumber(mvindex('tr.td', 2))
| eval Running = tonumber(mvindex('tr.td', 1)) - Issues
| stats sum(Running) as Running_count sum(Issues) as Issues_count by Vendor

Here is an emulation you can play with and compare with real data:

| makeresults
| eval log = mvappend("</tr> <tr> <td >Apple</td> <td >59</td> <td >7</td>",
    "</tr> <tr> <td >Samsung</td> <td >61</td> <td >13</td>",
    "</tr> <tr> <td >Oppo</td> <td >34</td> <td >5</td>",
    "</tr> <tr> <td >Vivo</td> <td >38</td> <td >11</td>")
| mvexpand log
| rename log AS _raw
``` data emulation above ```

The output of this emulation is:

Vendor    Running_count  Issues_count
Apple     52             7
Oppo      29             5
Samsung   48             13
Vivo      27             11
Well, I somehow fixed my problem by going to the "Bucket Status" page and "summarizing" the affected bucket in the repair tasks tab. Can someone explain what that did? I still do not get it.
Hi @richgalloway  Can I know what kind of third-party software would be needed to collect this value and send it to Splunk? I need "% Committed Bytes In Use" to be present in the Perfmon:Process stanza, meaning "% Committed Bytes In Use" should appear in that counter list. How can I get this added? Thanks
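My understanding (which may be wrong) is that in Windows Performance Monitor "% Committed Bytes In Use" belongs to the Memory object rather than the Process object, so I am wondering whether a perfmon stanza along these lines is what is needed (just a sketch; the interval is a guess on my part):

[perfmon://Memory]
object = Memory
counters = % Committed Bytes In Use
interval = 60
disabled = 0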
Well, no. While the add-on might contain some faulty definitions, it won't prevent the environment from searching in general or cause licensing warnings.
So after looking around in Splunk I found the bucket.  Any ideas what to do with it?

Bucket Status: _internal~1260~8D19E36A-C3DF-465D-9B7E-908324F333E5
Action: _internal does not meet: primacy & rf
3 day(s) 20 hour(s) 10 minute(s)
Cannot replicate as bucket hasn't rolled yet.
Hello Team, We have installed the machine agent, but agent metrics are not being populated at the controller. I can see the agent in the controller GUI and its status shows 100%. Apart from that, nothing is being reported. Below is the error I saw in the agent log file:

[AD Thread Pool-Global0] 28 Feb 2024 15:11:34,385  WARN SystemAgentPollingForUpdate - Invalid response for configuration request from controller/could not connect. Msg: Fatal transport error while connecting to URL [/controller/instance/2698/systemagentpolling]

Below are the AppD parameters being used:

system_props="$system_props -Dappdynamics.controller.hostName="
system_props="$system_props -Dappdynamics.controller.port=8181"
system_props="$system_props -Dappdynamics.agent.applicationName="
system_props="$system_props -Dappdynamics.agent.tierName=MCAG"
system_props="$system_props -Dappdynamics.agent.nodeName="
system_props="$system_props -Dappdynamics.agent.accountName=customer1"
system_props="$system_props -Dappdynamics.agent.accountAccessKey="
system_props="$system_props -Dappdynamics.controller.ssl.enabled=true"
system_props="$system_props -Dappdynamics.force.default.ssl.certificate.validation=true"
system_props="$system_props -Dappdynamics.sim.enabled=true"
system_props="$system_props -Dappdynamics.machine.agent.extensions.linux.newFrameworkEnabled=false"
system_props="$system_props -Dappdynamics.agent.uniqueHostId=`hostname -f`"
system_props="$system_props -Dappdynamics.machine.agent.extensions.calcVolumeFreeAndUsedWithDfCommand=true"

Regards, Amit Singh Bisht
</input>
<input type="dropdown" token="project">
  <label>Project</label>
  <choice value="tok1*">Token1</choice>
  <choice value="tok2*">Token2</choice>
  <default>tok1</default>
  <initialValue>tok1</initialValue>
  <change>
    <condition value="tok1">
      <set token="x-key">key1-</set>
    </condition>
    <condition value="tok2">
      <set token="x-key">key2-</set>
    </condition>
  </change>
</input>
<input type="multiselect" token="minorstate">
  <label>minorstate</label>
  <choice value="*">All</choice>
  <choice value="&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;,">Minorstate</choice>
  <default>"""a"", ""b"", ""c"", ""d""</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <initialValue>a,"b","c","d"</initialValue>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> , </delimiter>
  <fieldForLabel>minorstate</fieldForLabel>
  <fieldForValue>minorstate</fieldForValue>
  <search>
    <query>index=dunamis* sourcetype=dunamis_* producer=dunamis project=$project$ "x-key=$x-key$" | stats count by minorstate</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>

The tokens $project$ and $x-key$ are not getting replaced by the values that are being set in the dropdown. Can someone please help? Thank you!
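One thing I also noticed but have not verified: the choice values are "tok1*" and "tok2*", while the <condition> elements match on "tok1" and "tok2", so I am not sure whether the conditions ever fire and set $x-key$ at all. If the conditions need to match the full choice value, the change block would look like this (just my guess):

<change>
  <condition value="tok1*">
    <set token="x-key">key1-</set>
  </condition>
  <condition value="tok2*">
    <set token="x-key">key2-</set>
  </condition>
</change>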
Hi Splunkers, today I have a problem understanding how and where log sources send logs to Splunk. In this particular Splunk on-prem environment, no documentation has been done, except the HLD. So, for each log source, we have to understand what Splunk component it reaches and how. For example, if I have a Domain Controller, we must establish:

Where does it send logs? Directly to the indexers? To an HF?
Is a UF installed on it? If not, how does it send logs? WMI? WEF? Other?
And so on.

Now, "List of servers sending logs to Heavy forwarder" is a community discussion where I started from @scelikok's suggested search, and changed it to:

index=_internal component=TcpOutputProc
| stats count values(host) as host by idx
| fields - count

and it helped me a lot: for each Splunk component of the env (IDX, HF and so on), I'm able to understand which log sources send it data. So, what's the problem? The above search only returns data forwarded by another Splunk component. I mean, in the output, the field idx always has the format ip/hostname:9997, so the data are coming either from a server with a UF or from another Splunk host (we have some intermediate forwarders, so sometimes I can see data ingested by an HF coming from another HF). What about data sent without a Splunk agent/host? For example, suppose I have this flow:

Log source with syslog -> Splunk HF receiving on port 514

With the above search, I cannot see those sources (and I know for sure they exist in our env). How can I recover them? Syslog is only an example; the key point is: I must complete my search with all log sources that do not use a UF and/or any other Splunk element, but another forwarding tool/protocol (syslog, API, WEF, and so on).
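What I am considering as a starting point (a rough sketch only, not verified) is to enumerate everything the indexers actually hold and compare it against the TcpOutputProc list:

| tstats count where index=* by index, sourcetype, host

Any host that shows up here but not in the forwarder output should, I assume, be arriving via syslog, API, HEC or some other non-forwarder path, but I'd like to know if there is a more direct way.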
Hi All,   I have got logs like below:

Log1: </tr> <tr> <td >Apple</td> <td >59</td> <td >7</td>
Log2: </tr> <tr> <td >Samsung</td> <td >61</td> <td >13</td>
Log3: </tr> <tr> <td >Oppo</td> <td >34</td> <td >5</td>
Log4: </tr> <tr> <td >Vivo</td> <td >38</td> <td >11</td>

I have used the query below to extract fields from the data; the environment is extracted from the source.

.... | rex field=_raw "\<tr\>\s+\<td\s\>(?P<Domain>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Total>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Issues>[^\<]+)\<\/td\>"
| rex field=source "\/DashB\/[^\_]+\_(?P<Environment>[^\_]+)\_[^\.]+\.html"
| eval Running=(Total - Issues)
| stats sum(Running) as Running_count sum(Issues) as Issues_count by Environment

Now I want to create a pie chart with Running_count and Issues_count as the slices, with respect to the environment. Please help me create/modify the query to get the desired visualization.

Your kind inputs are highly appreciated..!! Thank you..!!
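One idea I have been trying to think through (untested, just a sketch; "PROD" below is only a placeholder Environment value): a pie chart plots one category field against one value, so I believe the two counts first have to be turned into rows, e.g. for a single environment:

.... | stats sum(Running) as Running_count sum(Issues) as Issues_count by Environment
| search Environment="PROD"
| fields Running_count Issues_count
| transpose column_name=Status
| rename "row 1" as Count

With Status as the category and Count as the value, the two counts should appear as slices; for all environments at once, a trellis layout split by Environment might be an option instead.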
Hello Ryan, The error was related to a stale connection. After clearing the stale nodes from the controller end, it got resolved.
Hello Everyone,   I am new to Splunk in my current role. We have had to downgrade our firewall version and switch from a physical to a virtual appliance, which changed the MAC address on the firewalls. Before this downgrade the logs were coming in, but now they have stopped. Any help would be appreciated.
Has anyone done this? I'm looking to parse timestamps embedded in the body field of logs and use them as the official log timestamp. Fluentd offers regex parsing for this, but I'm seeking a solution within OTel's framework. Any guidance or examples would be greatly appreciated!
@HarishSamudrala  The error message you provided indicates that search results might be incomplete because the search process ended prematurely on the peer.

Check Peer Logs: Look into the peer log files, specifically $SPLUNK_HOME/var/log/splunk/splunkd.log and the search.log for the particular search. Examine these logs for any relevant error messages or clues about what caused the premature termination (see the sketch at the end of this reply).

Memory and Resource Constraints: Ensure that the peer has sufficient resources (CPU, memory, disk space) to handle the search workload. Insufficient resources can sometimes lead to premature search process termination. Consider monitoring system resource usage during search execution.

License Considerations: If you're using a trial Splunk Enterprise distributed deployment, each instance must use its own self-generated Enterprise Trial license. In contrast, a distributed deployment running a Splunk Enterprise license requires configuring a license master to host all licenses.

Check for OOM Killer Events: Review /var/log/messages on the peer for any Out-of-Memory (OOM) Killer events. Insufficient memory can cause processes to terminate unexpectedly.

Increase ulimits for Open Files: If you haven't already, consider increasing the ulimit for open files on the indexers, for example to the recommended 64000 (initially it might be set to 4096).

Review Configuration: Verify that the configuration of your search head, indexers, and forwarders is correct, and ensure that the search head can communicate with the peer properly.

Remember to investigate the specific details in the logs to pinpoint the root cause. If you encounter any specific error messages or need further assistance, feel free to share additional details.

Solved: Search results might be incomplete: the search pro... - Splunk Community
https://community.splunk.com/t5/Splunk-Search/Search-results-might-be-incomplete-the-search-process-on-the/m-p/617673
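As a starting point for the first step, a search along these lines (just a sketch; replace <your_peer> with the actual peer host name) can surface recent errors from the peer's splunkd.log:

index=_internal sourcetype=splunkd log_level=ERROR host=<your_peer>
| stats count by component

The components with the highest error counts usually point at where to dig further in splunkd.log and the search.log.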
How do I set clearDefaultOnSelection to "true"? I don't want my multiselect panel to retain a value when I search.
Hi @man03359, this seems to be Splunk Cloud; in this case you don't need to manage the buckets. Bucket management and configuration are required only for on-premise installations. For Splunk Cloud, you only have to define how long you want to store data, also because, by default, you have 90 days, and if you want a longer period, you have to pay for the additional storage. Ciao. Giuseppe
Hello Splunk team... I am facing this issue when we run any searches on my Splunk setup. Can you help me with how we can fix this?

02-29-2024 06:58:53.370 ERROR DispatchThread [4125 phase_1] - code=10 error=""
02-29-2024 06:58:53.370 ERROR ResultsCollationProcessor [4125 phase_1] - SearchMessage orig_component=ResultsCollationProcessor sid=1709189933.399443_**** message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:  ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

Thank you..
@gcusello  So does that mean that if we set the search retention period to 90 days here, the data stays in the hot, warm, and cold buckets during those 90 days and rolls to the frozen bucket after 90 days?
Like they say in the olden days, Linux - eh, Splunk - can do anything except brew coffee.  Can you qualify your requirement?  Is the time range from a dashboard input of type Time?  In that case, starttime and endtime live in the token name that you give the input.  If you want a specific presentation of those values in a search, you just use the likes of strftime to manipulate them, as in the sketch below. If you want specific help, you need to clearly state your use case, including the desired output.  If you want to use one selector to set values in other selectors, as your mock screenshot seems to suggest, that is doable, too.  But you need to describe the desired behavior in unmistakable detail.
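For illustration only (a minimal sketch, assuming the dashboard's Time input drives the search's time range; the output format here is arbitrary), addinfo exposes the effective time range so strftime can format it:

| makeresults
| addinfo
| eval starttime = strftime(info_min_time, "%Y-%m-%d %H:%M:%S")
| eval endtime = strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| table starttime endtime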
Hi @lbrhyne  You can try one simple idea - just search for a 5-digit number in your logs (please check the logs and see if there are any other 5-digit numbers):

| makeresults
| eval log="run.\r\nTimeframe (PT) Success Failed % Failed\r\n\r\n05:15-06:14\r\n\r\n58570\r\n\r\n681\r\n\r\n1.15\r\n\r\nIf you believe you've received this email in error, please see your Splunk\"}"
| rex field=log "(?P<Successful>\d{5})"
| table log Successful