All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am trying to upgrade my Splunk version to 9.0.4.1, but am running into the following error when running ./splunk start: An error occurred: Failed to run splunkd rest: stdout: -- stderr: -- Has anyone else run into this problem?
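Not a fix as such, but a diagnostic sketch that may help narrow it down (paths assume a default $SPLUNK_HOME): the console message is usually just a symptom, and the underlying cause tends to be written to splunkd's own logs, or to be a .conf problem that btool can flag.

# Look for the real startup error in splunkd's own logs
tail -n 50 "$SPLUNK_HOME/var/log/splunk/splunkd.log"

# Validate .conf file syntax, which can abort startup after an upgrade
"$SPLUNK_HOME/bin/splunk" btool check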
Hi everybody, I have been using Splunk Cloud for a few days and I am stuck on token filtering in Dashboard Studio. My CSV, called "vulnevolution.csv", looks like this:

Date          Team    nbvuln
01/01/2022    SSI     27038
01/01/2022    IT      175600
01/02/2022    SSI     22733
01/02/2022    IT      187273

I want to create a line chart that displays nbvuln per time per team:

| inputlookup append=t vulnevolution.csv
| xyseries Date Team "nbvuln"
| makecontinuous

It works perfectly fine, BUT: I want to create a filtering token called teaminfra that filters the chart down to a single team. The token's name is teaminfra and its static values are IT and SSI. Do you have any idea how I could do that? I have the same requirement for another CSV file where the Team column is exactly the same, so I am planning to use this token for both dashboards (the second one is a table where I display comments). I tried to find this subject on the forum already; I found some explanations using dedup, but I could not figure out how to make them work in my case. Thank you all in advance for your help. Best regards, Alexandre
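A minimal sketch of one way to wire the token in, assuming a dropdown input whose token is teaminfra with static values IT and SSI (plus * if you want an "all teams" option): filter on Team before the xyseries, since the team names become column headers after it.

| inputlookup append=t vulnevolution.csv
| search Team="$teaminfra$"
| xyseries Date Team "nbvuln"
| makecontinuous

The same | search Team="$teaminfra$" line should drop into the second dashboard's table search unchanged.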
Hi all, I am stuck on an issue with event line breaking. I have an environment in which UFs send logs to the IDX through a HF. I created props.conf on both the HF and the indexer, and it worked: the stanzas I provided in props.conf break the events exactly the way I want. But after some time, some of the events appear in an unstructured format again... and after a while it goes back to working exactly the way I want. It is a bizarre scenario: while I am testing, it works fine; just as I present it to the client, it stops working; and a few minutes later it starts working fine again :P. Can anyone help me find the root cause of this inconsistent props.conf behaviour? Note: the stanzas themselves are fine and do break the events the way I want, but something is stopping them from working consistently.
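One pattern worth checking: line breaking is applied once, at the first full Splunk instance an event passes through, so if some of this data reaches the indexers without traversing the HF (or arrives under a different sourcetype), the same stanza will appear to work only intermittently. For reference, a typical explicit line-breaking stanza looks like this (the sourcetype name and timestamp format are placeholders for whatever your stanzas actually use):

# props.conf - on the first full instance the data passes through
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19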
Hello, and thank you in advance for your feedback. I would like to sort by date so that my graph is coherent; can you please help me?

| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication.Authentication
    where [| inputlookup ****.csv | eval host=Machine | table host ]
    AND NOT Authentication.src_user="*$" AND NOT Authentication.src_user="unknown"
    by host, Authentication.src, Authentication.src_user, _time
| eval host=upper(host)
| eval Serveur='Authentication.src'
| eval **** = upper(trim('Authentication.src_user'))
| eval samaccountname=substr(trim(upper('Authentication.src_user')),1,7)
| eval domaine="****"
| lookup **** samaccountname as samaccountname domaine as domaine
| search email="*"
| eval **** = samaccountname
| table ****, ****, host, email, ua, cn, _time, cn
| join host type=left [| inputlookup *****.csv | eval host=Machine]
| where Ferme="****" OR Ferme="****" OR Ferme="*****" OR Ferme="*****" OR Ferme="****"
| stats values(Ferme) as Ferme values(_time) as _time by *****, cn
| eval Date=strptime(_time,"%d/%m/%Y")
| sort Date
| eval Date=strftime(_time,"%d/%m/%Y")
| stats count as "nb. de connexion par jour" by Ferme, Date
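A hedged guess at the root cause: by this point _time already holds epoch seconds, so strptime(_time,"%d/%m/%Y") returns null and the sort has nothing to order by. One sketch of a fix is to replace the tail of the query with a lexicographically sortable day string, and only format it as %d/%m/%Y for display at the end:

| eval day=strftime(_time, "%Y/%m/%d")
| stats count as "nb. de connexion par jour" by Ferme, day
| sort 0 day
| eval Date=strftime(strptime(day, "%Y/%m/%d"), "%d/%m/%Y")
| fields - day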
Hi there! I was wondering if there's a specific app available in Splunk Enterprise Security that can provide CPU information. Specifically, I'm interested in getting process utilization info from an Mfg server.
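Not ES-specific, but for what it's worth, a sketch of how process-level CPU is usually pulled, assuming the Splunk Add-on for Unix and Linux (Splunk_TA_nix) is collecting ps data from that server (the index name and host below are assumptions):

index=os sourcetype=ps host="mfg-server-01"
| stats avg(pctCPU) as avg_cpu by COMMAND
| sort 0 - avg_cpu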
I am sampling the logs of the last 24 hours in the GUI by:
1. Search query: index=*
2. In the GUI timeframe options, select "Last 24 hours"
3. Click search
4. Search completes
5. Export results to CSV

In the CSV obtained, every field in each event was parsed into its own column, resulting in far too many columns. I would only like to export _raw, timestamp, and host into the CSV. Would there be any suggestions? Thank you.
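One sketch of a way to do this: restrict the columns in the search itself before exporting, for example:

index=*
| table _time host _raw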
Hi, I am doing statistical analysis on a number of indexes for time series forecasting. The following article gives this sample SPL query:

| gentimes start="01/01/2018" increment=1h
| eval _time=starttime, loc=0, scale=20
| normal loc=loc scale=scale
| streamstats count as cnt
| eval gen_normal = gen_normal + cnt
| table _time, gen_normal
| rename gen_normal as "Non-stationary time series (trend)"

The article is: https://towardsdatascience.com/time-series-forecasting-with-splunk-part-i-intro-kalman-filter-46e4bff1abff

The "normal" command is a custom external command, and I wanted to ask how and where I can get such statistical functions into Splunk? Many thanks as always,
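For what it's worth, here is a rough stand-in for the custom normal command using only built-in eval functions (a Box-Muller transform; loc=0 and scale=20 as in the article, with random() squashed into a (0,1] uniform - an approximation, not the article's actual app):

| gentimes start=01/01/2018 increment=1h
| eval _time=starttime
| eval u1=(random() % 100000 + 1) / 100000.0
| eval u2=(random() % 100000 + 1) / 100000.0
| eval gen_normal = 0 + 20 * sqrt(-2 * ln(u1)) * cos(2 * pi() * u2)
| streamstats count as cnt
| eval gen_normal = gen_normal + cnt
| table _time, gen_normal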
Hi Splunkers, I’m doing a trend analysis and showcasing colors in one of the panels of my dashboard, as required by my management. But for some reason, the colors are not reflected when I export it to PDF. As part of my job duties, I have to send dashboard PDFs to team members. Please advise on how I can fix this. P.S.: I am attaching an image of the dashboard panel: the field trend shows up, but the colors do not when I generate the PDF.
Hi Splunkers, does anyone have an idea how to configure a preferred path on a Splunk forwarder? I have 2 datacenters with many UFs sending data in load-balanced mode to their local indexers. I would like to configure the UFs to automatically send data to the indexers in the other DC only when the local ones are unavailable (maintenance / crash / ...). Thanks in advance; any suggestion will be very appreciated.
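If your indexers form (or could form) a multisite cluster, one option to sketch is indexer discovery with site affinity: each UF declares its home site, and the manager node's forwarder_site_failover setting moves it to the other site only when its own site's indexers are down. All names, URIs, and the key below are placeholders:

# UF - server.conf
[general]
site = site1

# UF - outputs.conf
[indexer_discovery:dc_discovery]
master_uri = https://manager.example.com:8089
pass4SymmKey = placeholder_key

[tcpout:dc_out]
indexerDiscovery = dc_discovery

[tcpout]
defaultGroup = dc_out

# Manager node - server.conf
[clustering]
forwarder_site_failover = site1:site2,site2:site1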
Hi folks, we have a Splunk instance which comprises 1 SH, 2 indexers in a cluster (replication enabled), 1 cluster master, and 1 heavy forwarder. We need to migrate these servers to a new set of servers. I would like to know the steps involved in migrating the already-indexed clustered data. I am a bit of a newbie in Splunk administration, so I would appreciate it if someone could walk me through the detailed steps and the points I need to take care of while migrating the data. Thanks in advance.
Hi, we have to monitor some jobs in which one job can have multiple sub-tasks. There can be nested dependencies as well: one task depends on another, and that one depends on yet another... I am looking to correlate these dependencies and want to see how much time a job took end to end. In the example below, the 1st task depends on the 2nd, the 2nd depends on the 3rd, and the 4th is the end of the task.
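Without seeing the log format, a rough sketch of the usual first step, assuming every task event carries a shared job identifier (the index, sourcetype, and field names here are hypothetical):

index=jobs sourcetype=scheduler
| stats earliest(_time) as job_start latest(_time) as job_end by job_id
| eval duration_min = round((job_end - job_start) / 60, 1)
| table job_id, job_start, job_end, duration_min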
Hi everyone, I'm learning about the Splunk REST API and I'm experiencing some temperamental behaviour. For example, I can fetch results with the query listed below from some reports, but it fails for others. Example:

curl -k -H "Authorization: Splunk myValidToken" https://myValidDomainName.splunkcloud.com:8089/services/saved/searches/%5BLOOKUP%5D%20Active%20Directory%20Devices%20No2/acl

Response:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Could not find object id=[LOOKUP] Active Directory Devices No2</msg>
  </messages>
</response>

The report name is correct. Have you got any suggestions for me? Many thanks, Toma
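One thing worth ruling out: the /services path only exposes objects visible in the caller's default namespace, so a report that is private to another user or scoped to a specific app can come back as "not found" there even though the name is correct. The /servicesNS form with wildcarded user and app contexts casts a wider net (same token and host as in the question):

curl -k -H "Authorization: Splunk myValidToken" \
  "https://myValidDomainName.splunkcloud.com:8089/servicesNS/-/-/saved/searches/%5BLOOKUP%5D%20Active%20Directory%20Devices%20No2/acl"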
We are trying to monitor a SQL DB in an Azure cloud VM that is making use of Bastion. Our DB agent is on-prem on a Windows server. The Bastion documentation claims that you do not need a public-facing IP for the VM. We are still trying to figure out what the host or connection details would then be for the DB on the VM, to allow the AppD DB agent to monitor it. Hope someone else has had some experience with this. Regards, Dietrich
We encountered an issue with v23.2 of the .NET agent for Windows where the Agent Config tool does not detect any IIS services. On the support ticket they initially just asked us to reinstall the agent, but later confirmed that it was a bug, and a new agent version was released on 4 April 2023 (356531). Just wanted to point this out to anyone possibly still using the affected version, and to make people aware that it was just a bug. Going back to an older agent version also worked for us.
Agent with bug
Older agent that did not have the issue
@splunk Enterprise Security Team, we need the API access key of the Splunk Enterprise Security app; please help us find where it is in the application settings, so that we can integrate BeyondTrust PAM with Splunk for syslog analysis.
How do I extract the field between the pipes, | servername |? The rex I am using is:

^[^\|\n]*\|(?P<Server>\w+\.\w+\.\w+\.\w+\s+)

but it is not extracting all of the servers:

05-Apr-2023 04:42:44:PM: |IISN11WCRL02.nnp.anp.co.xx | Ping statistics for 10.10.10.194: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 54ms, Maximum = 54ms, Average = 54ms
05-Apr-2023 04:42:41:PM: |IISN11WCRL02.nnp.anp.co.xx | Ping statistics for 10.10.10.194: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 54ms, Maximum = 57ms, Average = 54ms
05-Apr-2023 04:42:38:PM: |IIISN11WCRL02.nnp.anp.co.xx | Ping statistics for 10.10.10.194: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 53ms, Maximum = 54ms, Average = 53ms
05-Apr-2023 04:42:34:PM: |naz11sry001l | Ping statistics for 10.10.10.194: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 53ms, Maximum = 55ms, Average = 54ms
05-Apr-2023 04:42:31:PM: |naz11sry002l | Ping statistics for 10.10.10.194: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 53ms, Maximum = 55ms, Average = 54ms
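The \w+\.\w+\.\w+\.\w+ part only matches dotted FQDNs, which is why the short hostnames (naz11sry001l, naz11sry002l) are missed. A sketch of a looser pattern that takes everything between the first two pipes that is not whitespace or a pipe:

| rex "^[^\|\n]*\|\s*(?<Server>[^\s\|]+)\s*\|"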
We recently switched from email alerts to PagerDuty alerting. With this switch, the link to search results has broken. When the alert comes in through email, it gives a link that includes ...scheduler__user__search__abc__at_123__ and the Splunk query contains |loadjob scheduler__user__search__abc_at_123__. When the alert comes in through PagerDuty using $results_link$, it gives a link that includes ...schedulerusersearchabc__at__123 and the Splunk query contains |loadjob schedulerusersearchabc_at_123__. When this search is run, it gives an error that the alphabet-soup job ID couldn't be found. Is there a way to adjust $results_link$ to insert the __ between sections manually?
This is mostly about the app Compliance Essentials, but I had one other question first. I am getting an error (see below), and although I have all the supporting apps installed, I don't know how to fix the problem.
Lastly, do the apps in shcluster/apps all need to have a default folder with an app.conf file in them? I have noticed that most of the apps that properly present themselves on my SH have these folders and files.
Trying to install the HashiCorp Vault App on Splunk Cloud and I'm getting the following error: "Error downloading update from https://splunkbase.splunk.com/app/5093/release/1.0.3/download/: Forbidden" What gives?
How do I go about sharing my virtual tenant with my other users? Ultimately, I would like this one tenant to autoload when selecting TrackMe from the Apps dropdown.