All Topics


Hello, 1) What is the difference between using "| summaryindex" and "| collect"? The summary index is generated by a scheduled report. I clicked "view recent" and the following is appended after the search:

| summaryindex spool=t uselb=t addtime=t index="summary" file="summary_test_1.stash_new" name="summary_test_1" marker="hostname=\"https://test.com/\",report=\"summary_test_1\""

Collect can be used to push data outside of a scheduled report. 2) Can "| summaryindex" also be used to push data outside of a scheduled report?

| collect index=summary_test_1 testmode=false marker="report=summary_test_1"

Thank you for your help.
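As far as I can tell, | summaryindex is the internal variant that Splunk itself appends to scheduled summary-indexing reports, while | collect is the documented command meant for ad-hoc use. A minimal sketch of pushing ad-hoc results into a summary index with collect (the search and index name are illustrative; it assumes a summary index named "summary" already exists):

index=_internal sourcetype=splunkd log_level=ERROR earliest=-1h
| stats count by component
| collect index=summary source="adhoc_error_summary" marker="report=\"adhoc_error_summary\""

collect works from any search, scheduled or not, so there is normally no need to hand-write | summaryindex.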
Hi, I need your assistance with the below. We have created a new CSV lookup and are using the query below, but we are getting all the data from the index & sourcetype. The requirement is to get events only for the hosts mentioned in the lookup.

Lookup name: Win_inventory.CSV (uses only one column, called Server_name)

index=Nagio sourcetype=nagios:core:hard
| lookup Win_inventory.CSV Server_name as host_name OUTPUTNEW Server_name

Server_name is not an existing interesting field.
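One common way to restrict events to the hosts in a lookup is to turn the lookup into a search filter with a subsearch, rather than using | lookup (which only enriches events, it never filters them). A sketch, assuming host_name is the event field that holds the server name (swap in host if that is what your events use):

index=Nagio sourcetype=nagios:core:hard
    [| inputlookup Win_inventory.CSV
     | rename Server_name as host_name
     | fields host_name ]

The rename makes the subsearch emit host_name="..." OR host_name="..." terms, so only events for servers listed in the CSV survive.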
I am getting a warning:

DateParserVerbose - Accepted time (Wed Feb 14 17:01:12 2024) is suspiciously far away from previous event (Thu Jan 18 17:01:12 2024); is still acceptable because it was extracted by the same pattern

Is there any configuration in Splunk that can make this warning go away?
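One knob that is usually involved in this message is MAX_DIFF_SECS_AGO in props.conf on the parsing tier. A sketch, assuming a sourcetype named my_sourcetype (the stanza name and value are illustrative):

[my_sourcetype]
# allow timestamps up to ~60 days older than the previous event before warning
MAX_DIFF_SECS_AGO = 5184000

That said, this warning often points at events genuinely arriving with very old timestamps, so it is worth checking the source before raising the limit.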
For the last two days I have not been receiving data in my Splunk internal index. Please help me understand this issue.
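A quick check that sometimes narrows this kind of gap down (a sketch, assuming the index in question is _internal):

| tstats count where index=_internal earliest=-7d by _time span=1d

If the counts stop on a particular day, check disk space and retention settings on the indexers, and whether forwarders still appear in index=_internal source=*metrics.log* group=tcpin_connections.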
I have a use case where I want to return multiple values from a dropdown. The label is the same, but I want to return more than one value. I tried adding a secondary value like this, but it's not working:

"statics": [],
"label": ">primary | seriesByName(\"id\") | renameSeries(\"label\") | formatByType(formattedConfig)",
"value": ">primary | seriesByName(\"id\") | renameSeries(\"value\") | formatByType(formattedConfig)",
"value1": ">secondery | seriesByName(\"link\") | renameSeries(\"value\") | formatByType(formattedConfig)"
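As far as I know, Dashboard Studio inputs carry only a single value per choice, so extra keys like "value1" are ignored. A common workaround is to pack both values into one token value with a delimiter in the data-source search and split them wherever the token is consumed. A sketch (the field names id and link come from the snippet above; the delimiter and the token name $mytoken$ are illustrative):

In the primary data-source search:
| eval value = id . "|" . link

In the search that consumes the token:
| eval id = mvindex(split("$mytoken$", "|"), 0)
| eval link = mvindex(split("$mytoken$", "|"), 1)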
Hi all, I have to create a custom field at index time. I followed the documentation, but something is wrong. The field to read is a part of the source field (as you can see in the REGEX). Using a Deployment Server, I deployed to my Heavy Forwarders an app containing the following files: fields.conf, props.conf, transforms.conf.

In fields.conf I inserted:

[fieldname]
INDEXED = True

In props.conf I inserted:

[default]
TRANSFORMS-abc = fieldname

In transforms.conf I inserted:

[fieldname]
REGEX = /var/log/remote/([^/]+)/.*
FORMAT = fieldname::$1
WRITE_META = true
DEST_KEY = fieldname
SOURCE_KEY = source
REPEAT_MATCH = false
LOOKAHEAD = 100

Where's the error? What did I miss? Thank you for your help. Ciao. Giuseppe
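Not an authoritative diagnosis, but one thing stands out: with WRITE_META = true the transform already writes fieldname::$1 into _meta, and DEST_KEY = fieldname is not a valid destination key, which can make the transform fail. A sketch of the usual shape for an indexed-field transform (same regex, DEST_KEY dropped):

[fieldname]
SOURCE_KEY = source
REGEX = /var/log/remote/([^/]+)/.*
FORMAT = fieldname::$1
WRITE_META = true

It may also be safer to scope props.conf to the relevant sources, e.g. a [source::/var/log/remote/...] stanza instead of [default], so the transform only runs where it applies.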
Hello, I am new to Splunk and noticed we have two different authentication.conf files in the local folder. I compared the two and they are the same except for the groupBaseDN and host settings. Also, the bind password doesn't get hashed after a Splunk restart in one of the files. Unfortunately I don't have more information on this, and I am trying to understand the reason for the 2nd file. If the bind password is updated in the 1st file only, we are still not able to access Splunk, which makes me think the 2nd file is needed for some reason. Would anyone be able to suggest reasons for the 2nd file? I wonder if I need it. I'm also concerned about using the 2nd file, since the password doesn't get hashed. Thank you for your help!
LogName=Application EventCode=1004 EventType=4 ComputerName=Test.local User=NOT_TRANSLATED Sid=S-1-5-21-2704069758-3089908202-2921546158-1104 SidType=0 SourceName=RoxioBurn Type=Information RecordNumber=16834 Keywords=Classic TaskCategory=Optical Disc OpCode=Info Message=Date: Wed Feb 28 14:22:59 2024 Computer Name: COM-HV01 User Name: Test\test.user Writing is completed on drive (E:). Project includes 0 folder(s) and 1 file(s). Volume Label: 2024-02-28 Volume SN: 0 Volume ID: \??\Volume{b282bf1c-3dde-11ed-b48e-806e6f6e6963} Type: Unknown Status Of Media: Appendable,Blank,Closed session Files: C:\ProgramData\Roxio Log Files\Test.test.user_20240228142142.txt SHA1: 7c347a6724dcd243d396f9bb5e560142f26b8aa4 File System: None Disc Number: 1 Encryption: Yes User Password: Yes Spanned Set: No Data Size On Disc Set: 511 Bytes Network Volume: No

How would I write an eval command to extract User Name (without the domain), Status Of Media, Data Size On Disc Set, and Files from the Message field?
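rex is usually the simpler tool here, since eval cannot extract with a regex on its own. A sketch based on the sample above; each pattern anchors the value to the next label in the Message text, so adjust if your Message field keeps its original line breaks:

... | rex field=Message "User Name:\s+[^\\\\]+\\\\(?<UserName>\S+)"
    | rex field=Message "Status Of Media:\s+(?<StatusOfMedia>.+?)\s+Files:"
    | rex field=Message "Files:\s+(?<Files>.+?)\s+SHA1:"
    | rex field=Message "Data Size On Disc Set:\s+(?<DataSizeOnDiscSet>\d+\s+\w+)"

The first pattern skips everything up to the last backslash (the doubled backslashes are needed because both SPL and the regex engine consume one level of escaping), leaving just test.user in UserName.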
When trying to run the ML demos on my MacBook M2, running Splunk in a Docker environment, I get the following error in the middle of the display:

Error in 'fit' command: Failed to find Python for Scientific Computing Add-on (Splunk_SA_Scientific_Python_linux_x86_64)

After installing both the Linux x64 add-on (yes, it also runs fast under Rosetta 2) and the mac_Silicon one, and restarting the server, the error still remains. Any help appreciated. Thank you.
When I navigate to Settings > Tokens, I get this error message:

KVStore is not ready. Token auth system will not work.

The Splunk logs show this:

ERROR JsonWebToken [233289 TcpChannelThread] - KVStore is not ready. Token auth system will not work.
ERROR KVStoreConfigurationProvider [233052 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=KVStore service will not start because kvstore process terminated
ERROR KVStoreBulletinBoardManager [233053 MongodLogThread] - KV Store changed status to failed. KVStore process terminated..

How can this be fixed?
Hello, I have a query that gathers data for all API calls: P90/P95/P99 times, API response times in buckets (<1s, 1 to 3 seconds, up to >10s), and Avg and Peak TPS. No matter what I try, I am unable to get these to report hourly over the course of the last 24 hours. I am also using multiple joins in the query.

index=X
| eval eTime = responsetime
| stats count(responsetime) as TotalCalls, p90(responsetime) as P90Time, p95(responsetime) as P95Time, p99(responsetime) as P99Time by fi
| eval P90Time=round(P90Time,2)
| eval P95Time=round(P95Time,2)
| eval P99Time=round(P99Time,2)
| table TotalCalls, P90Time, P95Time, P99Time
| join type=left uri
    [search index=X
    | eval pTime = responsetime
    | eval TimeFrames = case(pTime<=1, "0-1s%", pTime>1 AND pTime<=3, "1-3s%", pTime>3, ">3s%")
    | stats count as CallVolume by platform, TimeFrames
    | eventstats sum(CallVolume) as Total
    | eval Percentage=round((CallVolume/Total)*100,2)
    | chart values(Percentage) over platform by TimeFrames
    | sort -TimeFrames]
| join type=left uri
    [search index=X
    | eval resptime = responsetime
    | bucket _time span=1s
    | stats count as TPS by _time, fi
    | stats max(TPS) as PeakTPS, avg(TPS) as AvgTPS by fi
    | eval AvgTPS=round(AvgTPS,2)
    | fields PeakTPS, AvgTPS]

My stats currently look like this:

TotalCalls  P90Time  P95Time  P99Time  0-1s%  1-3s%  AvgTPS  Platform  PeakTPS
1565113     0.35     0.44     1.283    98.09  1.91   434.75  abc       937

I just need these stats every hour over the course of the last X days. I am only able to get certain columns' worth of data; the chart in the first join and the fields in the second join are somehow messing it up.
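One direction that avoids the joins entirely (they also mix join keys like uri that the inner searches never output) is to compute most of this in a single pass, bucketed by hour. A sketch; the field names fi and responsetime are taken from the query above and may need reconciling with platform/uri:

index=X
| bin _time span=1h
| stats count as TotalCalls
        p90(responsetime) as P90Time p95(responsetime) as P95Time p99(responsetime) as P99Time
        count(eval(responsetime<=1)) as fast
        count(eval(responsetime>1 AND responsetime<=3)) as mid
  by _time fi
| eval P90Time=round(P90Time,2), P95Time=round(P95Time,2), P99Time=round(P99Time,2)
| eval fast_pct=round(fast/TotalCalls*100,2), mid_pct=round(mid/TotalCalls*100,2)
| rename fast_pct as "0-1s%", mid_pct as "1-3s%"

Peak/Avg TPS needs a 1-second bucket first, so it is easier as a separate pass (bin span=1s, stats count by _time fi, then bin _time span=1h and stats max/avg by _time fi) combined on the common _time, fi key rather than joined.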
Hello Team, We have installed the machine agent, but agent metrics are not being populated at the controller. I can see the agent in the controller GUI, and its status shows 100%. Apart from that, nothing is being reported. Below is the error I saw in the agent log file:

[AD Thread Pool-Global0] 28 Feb 2024 15:11:34,385  WARN SystemAgentPollingForUpdate - Invalid response for configuration request from controller/could not connect. Msg: Fatal transport error while connecting to URL [/controller/instance/2698/systemagentpolling]

Below are the AppDynamics parameters being used:

system_props="$system_props -Dappdynamics.controller.hostName="
system_props="$system_props -Dappdynamics.controller.port=8181"
system_props="$system_props -Dappdynamics.agent.applicationName="
system_props="$system_props -Dappdynamics.agent.tierName=MCAG"
system_props="$system_props -Dappdynamics.agent.nodeName="
system_props="$system_props -Dappdynamics.agent.accountName=customer1"
system_props="$system_props -Dappdynamics.agent.accountAccessKey="
system_props="$system_props -Dappdynamics.controller.ssl.enabled=true"
system_props="$system_props -Dappdynamics.force.default.ssl.certificate.validation=true"
system_props="$system_props -Dappdynamics.sim.enabled=true"
system_props="$system_props -Dappdynamics.machine.agent.extensions.linux.newFrameworkEnabled=false"
system_props="$system_props -Dappdynamics.agent.uniqueHostId=`hostname -f`"
system_props="$system_props -Dappdynamics.machine.agent.extensions.calcVolumeFreeAndUsedWithDfCommand=true"

Regards, Amit Singh Bisht
</input>
<input type="dropdown" token="project">
  <label>Project</label>
  <choice value="tok1*">Token1</choice>
  <choice value="tok2*">Token2</choice>
  <default>tok1</default>
  <initialValue>tok1</initialValue>
  <change>
    <condition value="tok1">
      <set token="x-key">key1-</set>
    </condition>
    <condition value="tok2">
      <set token="x-key">key2-</set>
    </condition>
  </change>
</input>
<input type="multiselect" token="minorstate">
  <label>minorstate</label>
  <choice value="*">All</choice>
  <choice value="&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;,">Minorstate</choice>
  <default>"""a"", ""b"", ""c"", ""d""</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <initialValue>a,"b","c","d"</initialValue>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> , </delimiter>
  <fieldForLabel>minorstate</fieldForLabel>
  <fieldForValue>minorstate</fieldForValue>
  <search>
    <query>index=dunamis* sourcetype=dunamis_* producer=dunamis project=$project$ "x-key=$x-key$" | stats count by minorstate</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>

The variables $project$ and $x-key$ are not getting replaced by the values that are being set in the dropdown. Can someone please help? Thank you!
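One likely culprit: the <choice> values are tok1* and tok2*, but the <change> conditions test for tok1 and tok2, so neither condition ever fires and $x-key$ is never set. Matching on the actual choice values (or on the labels) should help; a sketch:

<change>
  <condition value="tok1*">
    <set token="x-key">key1-</set>
  </condition>
  <condition value="tok2*">
    <set token="x-key">key2-</set>
  </condition>
</change>

The multiselect's <default> with doubled quotes ("""a"", ...) also looks unlikely to match real values and may be worth simplifying.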
Hi Splunkers, today I have a problem understanding how and where log sources send logs to Splunk. In this particular Splunk on-prem environment, no documentation has been done, except the HLD. So, for each log source, we have to work out which Splunk component it reaches and how. For example, if I have a Domain Controller, we must establish: Where does it send logs? Directly to the indexers? To a HF? Is a UF installed on it? If not, how does it send logs: WMI? WEF? Something else? And so on.

Now, "List of servers sending logs to Heavy forwarder" is a community discussion where I started from @scelikok's suggested search, and changed it into:

index=_internal component=TcpOutputProc
| stats count values(host) as host by idx
| fields - count

It helped me a lot: for each Splunk component in the environment (IDX, HF and so on), I can see which log sources send it data.

So, what's the problem? The above search only returns data forwarded by another Splunk component. In the output, the idx field always has the format ip/hostname:9997, so the data is coming from a server with a UF or from another Splunk host (we have some intermediate forwarders, so sometimes I see data reaching an HF from another HF). What about data sent without a Splunk agent/host? For example, suppose I have this flow: log source with syslog -> Splunk HF receiving on port 514. With the above search, I cannot see those sources (and I know for sure they exist in our environment). How can I recover them? Syslog is only an example; the key point is: I must complete my inventory with all log sources that do not use a UF and/or any other Splunk element, but some other forwarding tool/protocol (syslog, API, WEF, and so on).
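One way to catch the non-forwarder inputs (a sketch; adjust index scope and time range to your environment): network and HEC inputs stamp the event's source with the listening port or endpoint, so something like

| tstats count where index=* by index sourcetype source host
| search source="udp:*" OR source="tcp:*" OR source="http:*"

should surface syslog-style and HEC traffic, while index=_internal source=*metrics.log* group=tcpin_connections covers the Splunk-to-Splunk side you already have.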
Hi All, I have logs like below:

Log1: </tr> <tr> <td >Apple</td> <td >59</td> <td >7</td>
Log2: </tr> <tr> <td >Samsung</td> <td >61</td> <td >13</td>
Log3: </tr> <tr> <td >Oppo</td> <td >34</td> <td >5</td>
Log4: </tr> <tr> <td >Vivo</td> <td >38</td> <td >11</td>

I have used the query below to extract fields from the data; the environment is extracted from source.

.... | rex field=_raw "\<tr\>\s+\<td\s\>(?P<Domain>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Total>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Issues>[^\<]+)\<\/td\>"
| rex field=source "\/DashB\/[^\_]+\_(?P<Environment>[^\_]+)\_[^\.]+\.html"
| eval Running=(Total - Issues)
| stats sum(Running) as Running_count sum(Issues) as Issues_count by Environment

Now I want to create a pie chart with Running_count and Issues_count as the slices, with respect to the environment. Please help me create/modify the query to get the desired visualization. Your kind inputs are highly appreciated! Thank you!
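A pie chart needs one label column and one value column, so one approach is to reshape the two count columns into label/value rows with untable. A sketch appended to the search above:

... | stats sum(Running) as Running_count sum(Issues) as Issues_count by Environment
| untable Environment status count

The pie then uses status as the slice label and count as the slice size; if you need one pie per environment, a trellis layout split by Environment should do it.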
Hello Everyone, I'm new to Splunk in my current role. We had to downgrade our firewall version and switch it from a physical to a virtual appliance, which changed the MAC address on the firewalls. Before this downgrade the logs were coming in, but now they have stopped. Any help would be appreciated.
Has anyone done this? I'm looking to parse timestamps embedded in the body field of logs and use them as the official log timestamp. Fluentd offers regex parsing for this, but I'm seeking a solution within OTel's framework. Any guidance or examples would be greatly appreciated!
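In the OpenTelemetry Collector, the filelog receiver's operators can do this much like Fluentd's parsers. A sketch, assuming the timestamp sits at the start of the body as e.g. 2024-02-29 10:15:00 (the path, regex, and layout are illustrative):

receivers:
  filelog:
    include: [ /var/log/app/*.log ]
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<message>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'

The timestamp block promotes the captured group to the log record's official timestamp.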
How do I set clearDefaultOnSelection to "true", as I don't want my multiselect panel to retain a value when I search?
Hello Splunk team, I am facing this issue when we run any searches on my Splunk setup; can you help me with how to fix it?

02-29-2024 06:58:53.370 ERROR DispatchThread [4125 phase_1] - code=10 error=""
02-29-2024 06:58:53.370 ERROR ResultsCollationProcessor [4125 phase_1] - SearchMessage orig_component=ResultsCollationProcessor sid=1709189933.399443_**** message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:  ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

Thank you.
If I select "Last 24 hours" in the time filter, is there any automatic way to pass that 24-hour time range to a start date and an end date?
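If the goal is to turn the picker's range into explicit dates inside the search, addinfo exposes the boundaries. A sketch (index and date format are illustrative):

index=X
| addinfo
| eval start_date=strftime(info_min_time, "%Y-%m-%d %H:%M:%S"),
       end_date=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")

In a dashboard, the time picker token also exposes the range directly as $timetok.earliest$ and $timetok.latest$ (where timetok is whatever the picker's token is named).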