All Posts


Hello Team, we have installed the machine agent, but agent metrics are not being populated at the controller. I can see the agent in the controller GUI, and its status shows 100%. Apart from that, nothing is being reported. Below is the error I saw in the agent log file:

[AD Thread Pool-Global0] 28 Feb 2024 15:11:34,385  WARN SystemAgentPollingForUpdate - Invalid response for configuration request from controller/could not connect. Msg: Fatal transport error while connecting to URL [/controller/instance/2698/systemagentpolling]

These are the AppDynamics parameters being used:

system_props="$system_props -Dappdynamics.controller.hostName="
system_props="$system_props -Dappdynamics.controller.port=8181"
system_props="$system_props -Dappdynamics.agent.applicationName="
system_props="$system_props -Dappdynamics.agent.tierName=MCAG"
system_props="$system_props -Dappdynamics.agent.nodeName="
system_props="$system_props -Dappdynamics.agent.accountName=customer1"
system_props="$system_props -Dappdynamics.agent.accountAccessKey="
system_props="$system_props -Dappdynamics.controller.ssl.enabled=true"
system_props="$system_props -Dappdynamics.force.default.ssl.certificate.validation=true"
system_props="$system_props -Dappdynamics.sim.enabled=true"
system_props="$system_props -Dappdynamics.machine.agent.extensions.linux.newFrameworkEnabled=false"
system_props="$system_props -Dappdynamics.agent.uniqueHostId=`hostname -f`"
system_props="$system_props -Dappdynamics.machine.agent.extensions.calcVolumeFreeAndUsedWithDfCommand=true"

Regards, Amit Singh Bisht
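Note that in the parameters above, controller.hostName, agent.applicationName, agent.nodeName, and agent.accountAccessKey are all empty, which by itself would explain a "could not connect" transport error. A minimal sketch of those connection properties with hypothetical placeholder values (mycontroller.example.com, MyApp, node01, and REPLACE_ME are assumptions, not values from the post):

```shell
# Hypothetical placeholder values -- replace each with your own
# controller host, application name, node name, and access key.
system_props=""
system_props="$system_props -Dappdynamics.controller.hostName=mycontroller.example.com"
system_props="$system_props -Dappdynamics.controller.port=8181"
system_props="$system_props -Dappdynamics.controller.ssl.enabled=true"
system_props="$system_props -Dappdynamics.agent.applicationName=MyApp"
system_props="$system_props -Dappdynamics.agent.nodeName=node01"
system_props="$system_props -Dappdynamics.agent.accountName=customer1"
system_props="$system_props -Dappdynamics.agent.accountAccessKey=REPLACE_ME"
```
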
</input>
<input type="dropdown" token="project">
  <label>Project</label>
  <choice value="tok1*">Token1</choice>
  <choice value="tok2*">Token2</choice>
  <default>tok1</default>
  <initialValue>tok1</initialValue>
  <change>
    <condition value="tok1">
      <set token="x-key">key1-</set>
    </condition>
    <condition value="tok2">
      <set token="x-key">key2-</set>
    </condition>
  </change>
</input>
<input type="multiselect" token="minorstate">
  <label>minorstate</label>
  <choice value="*">All</choice>
  <choice value="&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;,">Minorstate</choice>
  <default>"""a"", ""b"", ""c"", ""d""</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <initialValue>a,"b","c","d"</initialValue>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> , </delimiter>
  <fieldForLabel>minorstate</fieldForLabel>
  <fieldForValue>minorstate</fieldForValue>
  <search>
    <query>index=dunamis* sourcetype=dunamis_* producer=dunamis project=$project$ "x-key=$x-key$" | stats count by minorstate</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>

The variables $project$ and $x-key$ are not being replaced by the values set in the dropdown. Can someone please help? Thank you!
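One likely cause worth checking: the dropdown's choice values are tok1* and tok2*, but the condition values are tok1 and tok2, so neither condition matches and $x-key$ is never set. A sketch of the dropdown with the condition values aligned to the choice values (this assumes the trailing * is intentional in the choices):

```xml
<input type="dropdown" token="project">
  <label>Project</label>
  <choice value="tok1*">Token1</choice>
  <choice value="tok2*">Token2</choice>
  <default>tok1*</default>
  <initialValue>tok1*</initialValue>
  <change>
    <!-- condition values must match the choice values exactly -->
    <condition value="tok1*">
      <set token="x-key">key1-</set>
    </condition>
    <condition value="tok2*">
      <set token="x-key">key2-</set>
    </condition>
  </change>
</input>
```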
Hi Splunkers, today I have a problem understanding how and where log sources send logs to Splunk. In this particular on-prem Splunk environment no documentation exists except the HLD, so for each log source we have to work out which Splunk component it reaches and how. For example, if I have a Domain Controller, we must establish: Where does it send logs? Directly to the indexers? To a HF? Is a UF installed on it? If not, how does it send logs: WMI? WEF? Something else? And so on.

"List of servers sending logs to Heavy forwarder" is a community discussion where I started from the search suggested by @scelikok, and changed it to:

index=_internal component=TcpOutputProc | stats count values(host) as host by idx | fields - count

It helped me a lot: for each Splunk component in the environment (IDX, HF and so on) I can now see which log sources send it data. So, what's the problem? The above search only returns data forwarded by another Splunk component. In the output, the idx field always has the format ip/hostname:9997, which means the data comes from a server with a UF or from another Splunk host (we have some intermediate forwarders, so sometimes I can see data reaching an HF from another HF). What about data sent without a Splunk agent/host? For example, suppose I have this flow: log source with syslog -> Splunk HF receiving on port 514. With the above search I cannot see those sources, and I know for sure they exist in our environment. How can I recover them? Syslog is only an example; the key point is: I must complete my inventory with all log sources that do not use a UF or any other Splunk component, but some other forwarding tool/protocol (syslog, API, WEF, and so on).
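One possible approach for the non-forwarder sources (a sketch, not a complete inventory method): connection metadata in _internal only covers Splunk-to-Splunk traffic, but every indexed event still carries host, source, and sourcetype, so you can inventory all senders per index and then subtract the hosts already known to connect as forwarders. The tcpin_connections field names below come from metrics.log and should be verified in your environment:

```
| tstats count where index=* by index, host, sourcetype
| search NOT
    [ search index=_internal source=*metrics.log* group=tcpin_connections
      | stats count by sourceHost
      | rename sourceHost as host
      | fields host ]
```

Whatever remains was ingested by some non-forwarder path (syslog, HEC/API, WEF, scripted inputs, and so on), which you can then classify by sourcetype.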
Hi All, I have logs like the below:

Log1: </tr> <tr> <td >Apple</td> <td >59</td> <td >7</td>
Log2: </tr> <tr> <td >Samsung</td> <td >61</td> <td >13</td>
Log3: </tr> <tr> <td >Oppo</td> <td >34</td> <td >5</td>
Log4: </tr> <tr> <td >Vivo</td> <td >38</td> <td >11</td>

I have used the query below to extract fields from the data; the environment is extracted from source.

.... | rex field=_raw "\<tr\>\s+\<td\s\>(?P<Domain>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Total>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Issues>[^\<]+)\<\/td\>"
| rex field=source "\/DashB\/[^\_]+\_(?P<Environment>[^\_]+)\_[^\.]+\.html"
| eval Running=(Total - Issues)
| stats sum(Running) as Running_count sum(Issues) as Issues_count by Environment

Now I want to create a pie chart with Running_count and Issues_count as the slices, with respect to the environment. Please help me create/modify the query to get the desired visualization. Your kind inputs are highly appreciated! Thank you!
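A pie chart expects one categorical field and one value, so the two sums need to be reshaped into rows first. A sketch continuing from the stats line in the post (untable turns the two count columns into status/count rows, after which the pie can split by status, with trellis by Environment if one pie per environment is wanted):

```
... | stats sum(Running) as Running sum(Issues) as Issues by Environment
| untable Environment status count
```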
Hello Ryan, the error was related to a stale connection. After clearing the stale nodes from the controller end, it got resolved.
Hello Everyone, I'm new to Splunk in my current role. We had to downgrade our firewall version and switch it from physical to virtual, which changed the MAC address on the firewalls. Before this downgrade the logs were coming in, but now they've stopped. Any help would be appreciated.
Has anyone done this? I'm looking to parse timestamps embedded in the body of logs and use them as the official log timestamp. Fluentd offers regex parsing for this, but I'm seeking a solution within OTel's framework. Any guidance or examples would be greatly appreciated!
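For reference, the OpenTelemetry Collector's filelog receiver has a regex_parser operator with a timestamp block, which is the rough equivalent of Fluentd's regex time parsing. A sketch under assumptions: the log path and the timestamp format shown are hypothetical and would need to match your actual logs:

```yaml
receivers:
  filelog:
    include: [ /var/log/myapp/*.log ]   # hypothetical path
    operators:
      - type: regex_parser
        # assumes lines begin with e.g. "2024-02-29 06:58:53 ..."
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
```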
@HarishSamudrala The error message you provided indicates that search results might be incomplete because the search process ended prematurely on the peer.

Check Peer Logs: Look into the peer log files, specifically $SPLUNK_HOME/var/log/splunk/splunkd.log and the search.log for the particular search. Examine these logs for any relevant error messages or clues about what caused the premature termination.

Memory and Resource Constraints: Ensure that the peer has sufficient resources (CPU, memory, disk space) to handle the search workload. Sometimes insufficient resources can lead to premature search process termination. Consider monitoring system resource usage during search execution.

License Considerations: If you're using a trial Splunk Enterprise distributed deployment, each instance must use its own self-generated Enterprise Trial license. In contrast, a distributed deployment running a Splunk Enterprise license requires configuring a license master to host all licenses.

Check for OOM Killer Events: Review /var/log/messages on the peer for any Out-of-Memory (OOM) Killer events. Insufficient memory can cause processes to terminate unexpectedly.

Increase ulimits for Open Files: If you haven't already, consider increasing the ulimits for open files on the indexers. For example, set the ulimit to the recommended 64000 (initially it might be set to 4096).

Review Configuration: Verify that the configuration of your search head, indexers, and forwarders is correct, and ensure that the search head can communicate with the peer properly.

Remember to investigate the specific details in the logs to pinpoint the root cause. If you encounter any specific error messages or need further assistance, feel free to share additional details.

Solved: Search results might be incomplete: the search pro... - Splunk Community
https://community.splunk.com/t5/Splunk-Search/Search-results-might-be-incomplete-the-search-process-on-the/m-p/617673
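The ulimit suggestion can be made persistent via PAM limits. A sketch, assuming the Splunk processes run as a user named splunk (the user name is an assumption):

```
# /etc/security/limits.conf -- raise the open-file limit for the splunk
# user to the recommended 64000 (takes effect at next login/service start)
splunk soft nofile 64000
splunk hard nofile 64000
```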
How do I set clearDefaultOnSelection to "true"? I don't want my multiselect panel to keep a pre-selected value when I search.
Hi @man03359, this seems to be Splunk Cloud; in that case you don't need to manage the buckets. Bucket management and configuration is only required for on-premises installations. For Splunk Cloud, you only have to define how long you want to store data; by default you have 90 days, and if you want a longer period you have to pay for the additional storage. Ciao. Giuseppe
Hello Splunk team, I am facing this issue whenever we run any search on my Splunk setup. Can you help me with how to fix it?

02-29-2024 06:58:53.370 ERROR DispatchThread [4125 phase_1] - code=10 error=""
02-29-2024 06:58:53.370 ERROR ResultsCollationProcessor [4125 phase_1] - SearchMessage orig_component=ResultsCollationProcessor sid=1709189933.399443_**** message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:  ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

Thank you.
@gcusello So does that mean, if we set the search retention period to 90 days here, the data stays in hot, warm, and cold during those 90 days, and after 90 days rolls to the frozen bucket?
Like they say in the olden days, Linux - eh Splunk, can do anything except brew coffee.  Can you qualify your requirement?  Is the time range from a dashboard's data input of type Time?  In that case, starttime and endtime are in the token name that you give the input.  If you want a specific presentation of those values in a search, you just use the likes of strftime to manipulate them. If you want specific help, you need to clearly state your use case including desired output.  If you want to use one selector to set values in other selectors as your mock screenshot seems to suggest, that is doable, too.  But you need to describe the desired behavior in unmistakable detail.
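As an illustration of the strftime approach, a sketch of Simple XML that derives formatted tokens from a Time input. The token names (timerange, start_fmt, end_fmt) are assumptions, and this sketch assumes the input returns relative time modifiers like -24h@h; absolute selections return epoch values and would need a case() guard:

```xml
<input type="time" token="timerange">
  <label>Time range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <!-- turn the range boundaries into display strings for other panels -->
    <eval token="start_fmt">strftime(relative_time(now(), $timerange.earliest$), "%Y-%m-%d %H:%M")</eval>
    <eval token="end_fmt">strftime(relative_time(now(), $timerange.latest$), "%Y-%m-%d %H:%M")</eval>
  </change>
</input>
```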
Hi @lbrhyne You can try one simple idea: just search for 5-digit numbers in your logs (please first check the logs to see whether there are any other 5-digit numbers):

| makeresults
| eval log="run.\r\nTimeframe (PT) Success Failed % Failed\r\n\r\n05:15-06:14\r\n\r\n58570\r\n\r\n681\r\n\r\n1.15\r\n\r\nIf you believe you've received this email in error, please see your Splunk\"}"
| rex field=log "(?P<Successful>\d{5})"
| table log Successful
if select 24 hours in time filter, is there any automatic way to pass the 24hrs time rage to start date and end date??
Hi @lbrhyne, there's an issue with field extraction in Splunk when your logs contain one or more backslashes; in my opinion it's a bug, and I have discussed it with Support specialists. In this case, try three or four backslashes instead of the two that work in regex101. You must use this workaround when you use the rex command in a search. If instead you want to use the regex for a field extraction outside a search, use the regex that works in regex101. Ciao. Giuseppe
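A small illustration of the doubled-escaping point (the sample value is hypothetical): inside the double-quoted rex string, four backslashes become two for the regex engine, which match one literal backslash in the data.

```
| makeresults
| eval raw="path=C:\\Temp\\file.txt"
| rex field=raw "path=(?<drive>[A-Z]):\\\\(?<folder>[^\\\\]+)"
```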
Hi @man03359, first of all: in frozenTimePeriodInSecs, don't use commas. Then, the meaning of the four statuses is the following:

Hot: just-indexed data, in a bucket whose tsidx files are still being written; usable for online searches.
Warm: data indexed within the last few days, used by most searches and usable for online searches; these buckets are usually placed on high-performance storage (at least 800 IOPS, better more).
Cold: less recent data, used by few searches and still usable for online searches; these buckets are usually placed on less expensive storage.
Frozen: data stored offline, recoverable by copying the entire bucket into the thawed folder. To keep frozen data, you must configure Splunk to save it; by default it is deleted.

Data rolls to frozen after the earliest event of a bucket exceeds the retention period; for this reason your searches may still return data older than the retention period. If you use a short retention period and index little data, your buckets could pass directly from warm to frozen or be deleted. It's very unlikely that data passes directly from hot to frozen, because a bucket rolls from hot to warm when it reaches 10 GB or after three days; you would need a retention period shorter than three days and less than 10 GB indexed in that period.

For more details see https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Setaretirementandarchivingpolicy and https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Howindexingworks

Ciao. Giuseppe
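As an indexes.conf illustration of the comma point (90 days is 7776000 seconds; the archive path is a hypothetical example):

```
# indexes.conf -- 90-day retention for the main index; no commas in the value
[main]
frozenTimePeriodInSecs = 7776000
# optional: archive frozen buckets instead of deleting them
coldToFrozenDir = /opt/splunk/frozen/main
```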
Have a look at "Replicate a subset of data to a third-party system". You can modify it and do something like this:

props.conf

[your-sourcetype-here]
TRANSFORMS-routing = routeAll

transforms.conf

[routeAll]
REGEX=(.)
DEST_KEY=_TCP_ROUTING
FORMAT=yourIndexer,ThirdParty

outputs.conf

[tcpout]
defaultGroup=nothing

[tcpout:yourIndexer]
disabled=false
server=10.1.12.1:9997

[tcpout:ThirdParty]
disabled=false
sendCookedData=false
server=10.1.12.2:1234
I'm trying to forward logs based on index to a third-party system, and at the same time I still need to retain the logs in Splunk. I've tried adding tcpout to outputs.conf, but it pushes all logs to the third-party system and doesn't store them in Splunk; I'm unable to search new logs in Splunk.

[tcpout]
defaultGroup=index1

[tcpout:index1]
sendCookedData=false (tried with and without this; neither works)
server=1.1.1.1:12468
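If this outputs.conf lives on the indexer itself, one likely fix (a sketch under assumptions; the group name ThirdParty is arbitrary, and the IP/port are the ones from the post) is indexAndForward, which makes the indexer keep indexing locally while forwarding:

```
# outputs.conf on the indexer -- index locally AND forward a copy
[tcpout]
defaultGroup = ThirdParty
indexAndForward = true

[tcpout:ThirdParty]
sendCookedData = false
server = 1.1.1.1:12468
```

To forward only specific data rather than everything, the usual approach is props/transforms routing that sets _TCP_ROUTING per sourcetype or source, instead of a defaultGroup that catches all traffic.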
Hi, I am starting out as a Splunk admin and am confused about one topic. It might be silly. While creating an index, we get the option to set the Searchable Retention (in days). I have read in the documentation that Splunk has 4 buckets: hot, warm, cold, and frozen. My question: suppose I set it to 90 days. During that 90-day period, will the data stay in the hot bucket the entire time and roll to frozen once the 90 days are over? Also, how is setting 90 days under Searchable Retention different from setting this below?

[main]
frozenTimePeriodInSecs = 7,776,000

Please explain. Thanks in advance.