All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I need to generate the current date in the format "20201123" and use it as a search filter on metadata. As far as I know there is no "_time" field in metadata results, so I need to generate the current date for the filter. Here is my query:

| metadata type=sources index="app"
| table source

Any ideas? Thanks!
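A possible approach (a sketch, assuming you want the sources whose most recent event falls on today's date, using the recentTime field that | metadata returns):

| metadata type=sources index="app"
| eval recentDate=strftime(recentTime, "%Y%m%d")
| eval today=strftime(now(), "%Y%m%d")
| where recentDate=today
| table source recentDate
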
I want to restrict a user's access based on a field present in the data model. However, under Roles -> Restrictions, even after entering the data model field and its value, it is not filtering the data. This is required because in the dashboard the user is seeing all the data in the data model. How can this be achieved? Thanks in advance!

Example:
User: Test01
Data model field: testfield
The Test01 user should see the dashboard with the data model data filtered by the above testfield.
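One thing worth checking: role-based search restrictions translate into a srchFilter that is applied to raw event searches, so the restriction field must exist on the indexed events themselves. A minimal authorize.conf sketch (the role name and field value here are placeholders for illustration):

[role_test01_role]
srchFilter = testfield="somevalue"

Also note that searches which read accelerated data model summaries (e.g. | tstats summariesonly=true) may bypass role search filters, which could explain why a data model dashboard still shows everything.
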
So I have some data like below in my _raw:

Name: BES Client, Running as: LocalSystem, Path: ""C:\Program Files (x86)\BigFix Enterprise\BES Client\BESClient.exe"", SHA1: 5bf0d29324081f2f830f7e66bba7aa5cb1c46047
Name: BESClientHelper, Running as: LocalSystem, Path: ""C:\Program Files (x86)\BigFix Enterprise\BES Client\BESClientHelper.exe"", SHA1: c989ae2278a9f8d6d5c5ca90fca6a57d19b168b8
Name: svchost.exe, PID: 424, PPID: 432, ( Started up: Mon, 19 Sep 2022 03:41:57 -0700 ), Running as: NT AUTHORITY\LOCAL SERVICE, Path: C:\Windows\System32\svchost.exe, SHA1: 3196f45b269a614a3926efc032fc9d75017f27e8
Name: scsrvc.exe, PID: 1384, PPID: 432, ( Started up: Mon, 19 Sep 2022 03:42:34 -0700 ), Running as: NT AUTHORITY\SYSTEM, Path: C:\Program Files\McAfee\Solidcore\scsrvc.exe, SHA1: ef1cc70f3e052a6c480ac2fe8cdfe21a502669cc

I am trying to parse out just the "running" process name, like "BES Client" or "BESClientHelper", but only where the text "Running" directly follows the name, so I know it's one of those entries and not the last two .exe lines. Make sense? Thanks!!
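A rex sketch that captures only names immediately followed by "Running as:" (the field name process_name is my choice):

... | rex max_match=0 "Name:\s(?<process_name>[^,]+),\sRunning as:"
| mvexpand process_name
| table process_name

Because the svchost.exe and scsrvc.exe lines have "PID:" right after the name, the pattern skips them.
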
Hey Splunkers,

It's a distributed environment. We created an index on the cluster master, and we can search that index from the SH cluster members. Now I need to create user roles and add a specific index to a role, but while doing that, the indexes created on the CM are not listed. I then checked on the SH under Settings -> Indexes, and those indexes are not listed there either. What could be the reason? Note: on the SH I can see the data if I search index=<index name>. Where do I need to start my analysis?
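The role editor and Settings -> Indexes only list indexes defined in the search head's own configuration; they do not query the indexer cluster. A common fix is to deploy the same index stanzas to the search heads, e.g. a minimal indexes.conf sketch (the index name and paths are placeholders; the SH never actually writes to these paths):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
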
Hello, I'm having some confusing problems with Splunk permissions that I am trying to understand. A little background: we upgraded our indexer/deployment server from Debian to Ubuntu. Here is the problem I am seeing after this upgrade.

I was monitoring a file at /var/log/test-combo.log and everything worked beforehand on Debian 11. Now I am not getting any of the data from this file ingested into my index, but I can see fresh logs in it. The file is owned by syslog and the group is adm. My splunk user: uid=1001(splunk) gid=1001(splunk) groups=1001(splunk),4(adm)

I wanted to do a test, so I went under Data Inputs > Files & Directories > New Local File & Directory > Browse > var > log. The strange thing was that I could only see half of the logs and half of the directories under there. All the directories and files that I could see were owned root:root and had other::r-- set; the file in question (test-combo.log) didn't have other::r-- set.

So why is Splunk able to see files with these permissions:

# file: vpn.log
# owner: root
# group: root
user::rw-
group::rw-
other::r--

and not able to see files with this permission:

# file: test-combo.log
# owner: syslog
# group: adm
user::rw-
group::r--
other::---

Is it because other is not set to read perms? What would be the significance of setting other to read?
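A quick way to confirm whether this is purely a filesystem permissions issue (a sketch; it assumes you have sudo and that Splunk runs as the splunk user):

# does the splunk user's group set include adm?
sudo -u splunk id
# can the splunk user actually open the file?
sudo -u splunk head -n 1 /var/log/test-combo.log

If that read succeeds, note that a running splunkd only picks up supplementary group membership at process start, so a Splunk restart after the user was added to adm may be all that's needed. With group::r-- and membership in adm, other::r-- should not be required.
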
I'm analysing VPN connection logs to produce a report counting staff working from home for longer than 6 hours a day. Unfortunately the VPN session isn't started and ended by the staff member; the VPN just writes a log when data is sent, so there is nothing in the data that can be used as a start or end flag.

I have tried using the transaction command with Username as the unique element and 'maxpause' between sessions set to 65 minutes. Example query:

index=VPN sourcetype=VPNlog
| transaction Username maxevents=-1 maxpause=65m
| stats sum(duration) as Duration BY Username

This worked for small sample sets of data; I could then extract the count of staff whose total session duration was over 6 hours a day. However, when I ran the same query over the complete data set it produced an incomplete set of results along with the message: "Some transactions have been discarded. To include them, add keepevicted=true to your transaction command." Enabling keepevicted produces more results but the figures are incorrect; I assume there are still too many events for the transaction command to analyse.

After reading about the limitations of the transaction command I tried using stats in its place. This works far quicker and over all the data, except there doesn't seem to be an equivalent of 'maxpause' to end a session. Instead, a staff member's duration always ends up spanning from the start of their first connection to the end of their last, which makes people appear to work 12-hour days because they log on remotely briefly in the morning and again briefly in the evening.

Is there another way to use the transaction command that will allow it to handle more data? The results don't have to return quickly, as the search will be run overnight for reporting.
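A streamstats-based sessionization sketch that reproduces maxpause without transaction's memory limits (65 minutes = 3900 seconds; thresholds taken from your example):

index=VPN sourcetype=VPNlog
| sort 0 Username _time
| streamstats current=f last(_time) as prev_time by Username
| eval new_session=if(isnull(prev_time) OR _time - prev_time > 3900, 1, 0)
| streamstats sum(new_session) as session_id by Username
| stats min(_time) as session_start max(_time) as session_end by Username session_id
| eval duration=session_end - session_start
| stats sum(duration) as Duration by Username
| where Duration > 6*3600

For a per-day breakdown, bucket session_start with bin span=1d and add that field to the final stats by clause.
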
Hi All,

I have events like below, and I want to extract the fields TotalRecords, SuccessRecords, FailedRecords, Batch, SuccessRecords, FailedRecords, BatchSize, Success, Failed. If the data is not there for an event, it should show as blank or null.

Item InsertStatus= 'TotalRecords': 1 'SuccessRecords': 1 'FailedRecords': 0 Entity: DevOpsItemAttribute records
Batch 1 SuccessRecords=1 FailedRecords=0 EntityData
Entity Delete Status BatchSize=50000 Success=26 Failed=0

My output should be like below:

TotalRecords, SuccessRecords, FailedRecords, Batch, SuccessRecords, FailedRecords, BatchSize, Success, Failed
1,1,0,null,null,null,null,null,null
null,null,null,1,1,0,null,null,null
null,null,null,null,null,null,50000,26,0
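A rex sketch; since a table can't hold two columns with the same name, the batch-level counters are renamed BatchSuccessRecords/BatchFailedRecords (my naming), and fillnull supplies the literal "null":

... | rex "'TotalRecords':\s*(?<TotalRecords>\d+)"
| rex "'SuccessRecords':\s*(?<SuccessRecords>\d+)"
| rex "'FailedRecords':\s*(?<FailedRecords>\d+)"
| rex "Batch\s+(?<Batch>\d+)\s+SuccessRecords=(?<BatchSuccessRecords>\d+)\s+FailedRecords=(?<BatchFailedRecords>\d+)"
| rex "BatchSize=(?<BatchSize>\d+)\s+Success=(?<Success>\d+)\s+Failed=(?<Failed>\d+)"
| fillnull value="null" TotalRecords SuccessRecords FailedRecords Batch BatchSuccessRecords BatchFailedRecords BatchSize Success Failed
| table TotalRecords SuccessRecords FailedRecords Batch BatchSuccessRecords BatchFailedRecords BatchSize Success Failed

The quoted 'SuccessRecords': pattern only matches the first event style, and the SuccessRecords= pattern only matches the batch lines, so the two sets of columns stay separate.
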
Is it possible to create a Pie Chart from three fields? If so, how?   Thanks a million in advance! 
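One common pattern is to aggregate each field and transpose, so each field becomes a slice (a sketch; field1/field2/field3 are placeholders):

... | stats sum(field1) as Field1, sum(field2) as Field2, sum(field3) as Field3
| transpose
| rename column as Category, "row 1" as Count

Then select the pie chart visualization, which uses Category as the slice label and Count as the slice size.
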
Is there any documentation specifically on upgrading a standalone search head? I can only find documentation for a cluster, and I want to make sure I document the correct steps for a change request before actually doing it.
I have a job that runs multiple times if it fails. I need to create a dashboard with a table that shows all the attempts with their status.

Logs:

{id:"1",retrynumber:"1",uniqueid:"23213131",status:"Failed"}
{id:"1",retrynumber:"2",uniqueid:"43434333",status:"Failed"}
{id:"1",retrynumber:"3",uniqueid:"23213132",status:"Failed"}
{id:"1",retrynumber:"4",uniqueid:"23213154",status:"Passed"}

I want to have a table like:

id    retry1    retry2    retry3    retry4
1     Failed    Failed    Failed    Passed
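A chart-based sketch (it assumes id, retrynumber, and status are already extracted, e.g. by automatic key-value extraction or spath on JSON events):

index=<your_index> ...
| eval retry_col="retry".retrynumber
| chart latest(status) over id by retry_col
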
Hi,

I want to display the error details from the last 30 minutes, so they can be investigated, whenever the number of errors has increased by 10% over the previous 30 minutes.

Search 1: this is the search for the data I want to show in the results.

index=myindex source=mysource sourcetype=mysourcetype FailureReason IN ("*Error1*", "*Error2*", "*Error3*")
| table ReqReceivedTimestamp, APIName, ReqUrl, ShopName, ResponseCode, FailureReason, FailureServiceCalloutResponse

Search 2: this is the search I have to work out whether there are over 10% more errors compared to the previous 30 minutes.

index=myindex source=mysource sourcetype=mysourcetype FailureReason IN ("*Error1*", "*Error2*", "*Error3*")
| timechart span=30m count as server
| streamstats window=1 current=f values(server) as last30
| eval difference=server-last30
| eval percentage_change=round((difference/last30)*100,2)
| eval AboveThreshold=if(round(((server-last30)/last30),4)>.10, "True", null())
| where AboveThreshold = "True"
| table percentage_change

I want to understand the best way to combine these two searches and show the table from Search 1 only when Search 2 is over 10%.
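One way to avoid joining two searches is to compute both windows in a single search and keep the detail rows only when the threshold trips (a sketch over a fixed 60-minute range):

index=myindex source=mysource sourcetype=mysourcetype earliest=-60m FailureReason IN ("*Error1*", "*Error2*", "*Error3*")
| eval window=if(_time >= relative_time(now(), "-30m"), "current", "previous")
| eventstats count(eval(window="current")) as cur, count(eval(window="previous")) as prev
| where window="current" AND prev > 0 AND (cur - prev) / prev > 0.10
| table ReqReceivedTimestamp, APIName, ReqUrl, ShopName, ResponseCode, FailureReason, FailureServiceCalloutResponse

eventstats attaches both window counts to every event, so the where clause can filter the current-window detail rows against the previous-window total.
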
Hi, good day everyone. I need your help, please. I need to join events in a log that records them by date, but I need the whole span of each event, from its beginning to its end (the event begins when a card is inserted and ends when that card is extracted). I have tried to do it with a regex on the sourcetype, but without success.
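A transaction-based sketch (the field name card_id and the message text are assumptions, since the actual log format isn't shown):

index=<your_index> ("card inserted" OR "card extracted")
| transaction card_id startswith="card inserted" endswith="card extracted"
| table _time, card_id, duration

The duration field that transaction produces is the number of seconds between the insert event and the extract event.
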
Hello, this is my first experience with Splunk Cloud and I would like to ask for some help. I am trying to forward logs from Fortinet to my heavy forwarder. I have configured UDP port 514 and the sourcetype fortigate_log as per the option presented in Data Inputs. After the settings and index choice, I started searching for the events, but without success. Can you help me configure this so that the events appear in both the heavy forwarder and Splunk Cloud? NOTE: My environment has two heavy forwarders and no deployment server; the communication is direct with Splunk Cloud.
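A minimal inputs.conf sketch for each heavy forwarder (the index name is a placeholder and must already exist in Splunk Cloud):

[udp://514]
sourcetype = fortigate_log
index = fortinet
connection_host = ip

Two things worth verifying: the Splunk Cloud forwarder credentials app (the outputs package downloaded from your Cloud instance) must be installed on both heavy forwarders so they can send to Cloud, and on Linux, binding to port 514 requires root privileges, so many deployments listen on a port above 1024 instead and point the FortiGate there.
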
I'm having a hard time understanding what Splunk Observability does that you can't do with the Splunk Platform (Cloud or Enterprise). Aren't you able to take in logs, metrics, and traces and do real-time reporting, monitoring, and visualization with the Splunk Platform? And isn't the Splunk Platform used to detect and solve issues? Isn't that the same as what Observability does? Thanks a lot for your help in advance!
How do I check which major destinations generate the most logs on a specific firewall (host=10.22.44.254)? I would like to know the correct command to find the main destinations, and also how to filter them out, so I can see how much license I would save if I stopped receiving them.
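A sketch (it assumes the destination field is extracted as dest_ip; adjust to your firewall sourcetype):

index=<firewall_index> host="10.22.44.254"
| eval raw_len=len(_raw)
| stats count sum(raw_len) as bytes by dest_ip
| eval MB=round(bytes/1024/1024, 2)
| sort - bytes

The MB column approximates license usage per destination; to see the volume without the top talkers, re-run the stats with NOT dest_ip IN (<top destinations>) added to the base search.
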
I have a saved search running every few minutes to append data to a 15-day CSV log file within Splunk. I'm trying to use a timechart with timewrap to compare yesterday's values between 6am and 8pm with the same period a week earlier.

If I run the search before 6am, I get exactly what I want: two bell-shaped series on a timechart showing a single day from 6am to 8pm. However, if I run the exact same search after 6am, I get four series on a timechart spanning two days: on the left of the chart a comparison of two series up until 8pm, then a blank period in the middle of the chart from 8pm to 6am the following day, and then on the right a comparison of two series from 6am.

| inputlookup fifteen_day_logfile.csv
| where (_time>=relative_time(now(),"-8d@d+6h") AND _time<=relative_time(now(),"-8d@d+20h")) OR (_time>=relative_time(now(),"-1d@d+6h") AND _time<=relative_time(now(),"-1d@d+20h"))
| timechart span=5m cont=false sum(Value) as Value
| timewrap 1d

Basically, I'm stumped as to why timewrap is sometimes ignoring the relative_time statements, depending on what time of day it is run. Any help would be much appreciated.
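timewrap aligns its wrap points to the search's time bounds rather than to your where clause, which appears to be why the result changes once today's 6am boundary has passed. One workaround that sidesteps timewrap entirely is to shift the older week forward yourself and split by a series label (a sketch reusing your existing where clause):

| inputlookup fifteen_day_logfile.csv
| where (_time>=relative_time(now(),"-8d@d+6h") AND _time<=relative_time(now(),"-8d@d+20h")) OR (_time>=relative_time(now(),"-1d@d+6h") AND _time<=relative_time(now(),"-1d@d+20h"))
| eval Series=if(_time<relative_time(now(),"-2d@d"), "week_before", "yesterday")
| eval _time=if(Series="week_before", _time+7*86400, _time)
| timechart span=5m cont=false sum(Value) by Series

Both series then overlay on yesterday's 6am-8pm axis regardless of when the search runs.
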
Hi, I have a lookup as follows:

ip               id   name
111.111.111.111  111  simone
*                222  marco

In the index I have:

ip               id
111.111.111.111  111
222.222.222.222  222
333.333.333.333  222

The result I'm looking for is the following:

ip               id   name
111.111.111.111  111  simone
222.222.222.222  222  marco
333.333.333.333  222  marco

Can you help me? Thank you,
Simone
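For the * row to match, the lookup needs wildcard matching on the ip field. A transforms.conf sketch (my_lookup and the filename are placeholders; the same settings can be entered under the lookup definition's advanced options in the UI):

[my_lookup]
filename = my_lookup.csv
match_type = WILDCARD(ip)
max_matches = 1

Then in the search:

... | lookup my_lookup ip OUTPUT name

With max_matches = 1 the first matching row in file order wins, so the exact 111.111.111.111 row must appear above the * row in the CSV.
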
I have the following search:

index=felix_emea sourcetype="Felixapps:prod:log" Action = "Resp_VPMG"
| dedup EventIndex
| rex field=Message "^<b>(?<Region>.+)<\/b>"
| rex "Response Codes:\s(?<responseCode>\d{1,3})"
| rex field=Message ":\s(?<errCount>\d{1,4})$"
| fields "Action" "Region" "responseCode" "errCount"
| timechart sum(errCount) by Region

which returns the following events:

Time                 Action     responseCode  Region                 errCount
21/11/2022 09:46:07  Resp_VPMG  912           VPMG - Wizink PRD-E5   14
21/11/2022 09:16:31  Resp_VPMG  911           Moneta IBS via VPMG    8
21/11/2022 03:02:07  Resp_VPMG  911           Moneta IBS via VPMG    129
21/11/2022 02:46:59  Resp_VPMG  911           Moneta IBS via VPMG    92
20/11/2022 20:31:38  Resp_VPMG  911           Moneta IBS via VPMG    16
20/11/2022 19:31:36  Resp_VPMG  911           Moneta IBS via VPMG    32
20/11/2022 02:26:45  Resp_VPMG  911           Addiko IBS via VPMG    7

I can display the results on a bar chart, but I have no visibility of the 'responseCode' field. If I copy the data into Power BI, I can easily get a visualisation showing errors by region and by responseCode (using Power BI's 'small multiples', which seems to be the equivalent of Splunk's 'trellis'). Can I recreate this visualisation in Splunk? Using the trellis option only allows me to trellis the report by Region and not by responseCode.

Thanks. Steve
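Since trellis splits on a single field, one workaround is to combine Region and responseCode into one series before charting (a sketch based on your existing search):

... | eval series=Region." / ".responseCode
| timechart sum(errCount) by series

Trellising by series then yields one small chart per Region/responseCode combination; alternatively a non-time view with | chart sum(errCount) over Region by responseCode keeps both fields visible on a single stacked bar chart.
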
Hi Everyone,

I have 3 pie charts in a panel, showing agent statistics as follows:
- 1st pie chart displays overall statistics split by analyst;
- 2nd pie chart displays daily statistics split by analyst ( | where shift="Day")
- 3rd pie chart displays nightly statistics split by analyst ( | where shift="Night").

I've created a drilldown which works fine for the overall pie chart, and it correctly displays the data in another panel based on the value of the clicked slice. To accomplish this I've created a token called "tokNames" and assigned it an initial value of * (all):

<init>
  <set token="tokNames">*</set>
</init>

Drilldown for the overall pie chart:

<drilldown>
  <set token="tokNames">$click.value$</set>
</drilldown>

The problem starts with the daily and nightly pie charts: when I click on a name, it displays all the statistics of that particular agent, instead of showing only the daily or only the nightly statistics. Any assistance would be greatly appreciated. Thank you in advance.

Toma.
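One way to make the daily and nightly drilldowns carry the shift along is to set a second token per chart and filter the target search on both (a sketch; tokShift is a new token name I'm introducing, initialised to * alongside tokNames):

<!-- drilldown for the Day pie chart -->
<drilldown>
  <set token="tokNames">$click.value$</set>
  <set token="tokShift">Day</set>
</drilldown>

<!-- drilldown for the Night pie chart -->
<drilldown>
  <set token="tokNames">$click.value$</set>
  <set token="tokShift">Night</set>
</drilldown>

The target panel's search would then filter on both tokens, e.g. ... analyst="$tokNames$" shift="$tokShift$" ..., with the overall chart's drilldown setting tokShift back to *.
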
I am getting an error on both of my indexers when they attempt to join the cluster master:

Search peer Splunkindex1 has the following message: failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=splunkmaster:8089 rv=0 gotConnectionError=1 gotUnexpectedStatusCode=0 actual_response_code=502 expected_response_code=2xx status_line="Error connecting: Winsock error 10061" socket_error="Winsock error 10061" remote_error=[event=addPeer status=retrying Add PeerRequest....

Does anyone have a solution for this? Thank you
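Winsock error 10061 is "connection refused", which usually means nothing is listening on splunkmaster:8089 (splunkd down, wrong hostname or port) or a firewall is blocking the connection. A quick connectivity check from each indexer (a PowerShell sketch, assuming Windows hosts given the Winsock error):

Test-NetConnection splunkmaster -Port 8089

It is also worth comparing the peers' server.conf clustering stanza against the master, e.g. (setting names vary by Splunk version; newer releases use manager_uri and mode = peer):

[clustering]
mode = slave
master_uri = https://splunkmaster:8089
pass4SymmKey = <must match the cluster master>
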