All Posts



May I ask how you changed the UF to run as SYSTEM? Is it simply a case of setting SPLUNK_OS_USER in splunk-launch.conf, as it would be on a Linux host, i.e. SPLUNK_OS_USER=SYSTEM? Thank you, and apologies if this is a really lame question.
Waiting for a reply.
Hi all, I have this calculation, and at the end I am using where to keep only what I need. Splunk suggests putting the filter into the base search:

index=xyz AND source=abc AND sourcetype=S1 AND client="BOFA" AND status_code

How do I make this return only the status codes that are >=199 and <300 (these belong to my success bucket) or >=499 (these belong to my error bucket)?

| eval Derived_Status_Code=case(status_code>=199 AND status_code<300, "Success", status_code>=499, "Errors", 1=1, "Others") ``` I do not need anything that is not in the above conditions ```
| table <>
| where Derived_Status_Code IN ("Errors", "Success")

I want to avoid the where and move this into the base search using AND. Thank you so much for your time.
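A sketch of how that filter could move into the base search, assuming status_code is extracted as a numeric field at search time (thresholds copied from the post; index/source/sourcetype names as given):

```
index=xyz source=abc sourcetype=S1 client="BOFA"
    ((status_code>=199 AND status_code<300) OR status_code>=499)
| eval Derived_Status_Code=case(status_code>=199 AND status_code<300, "Success",
                                status_code>=499, "Errors")
| table status_code Derived_Status_Code
```

With the range filter in the base search, the case() no longer needs an "Others" branch and the trailing where can be dropped.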
I'm looking for the average CPU utilization of 10+ hosts over a fixed period last month. However, every time I refresh the URL or the metrics, the number changes drastically; yet when I do the same for 2 other hosts, the number stays the same between refreshes. Is it because sampling is happening somewhere? If so, where can I disable the sampling config?
I'm trying to configure the Splunk Universal Forwarder to send logs to Logstash. I only have access to the Universal Forwarder (not a Heavy Forwarder), and I need to forward audit logs from several databases, including MySQL, PostgreSQL, MongoDB, and Oracle. So far, I've been able to send TCP syslogs to Logstash using the Universal Forwarder. Additionally, I've successfully connected to MySQL using Splunk DB Connect, but I'm not receiving any of its logs in Logstash. I would appreciate any advice on forwarding database audit logs through the Universal Forwarder to Logstash in real time, or is there any way to create a sink of some kind? Any help or examples would be great! Thanks in advance.
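For reference, a minimal outputs.conf sketch for sending raw (uncooked) TCP from a Universal Forwarder to a Logstash tcp input; the hostname and port are placeholders:

```
# outputs.conf on the Universal Forwarder (server value is a placeholder)
[tcpout]
defaultGroup = logstash

[tcpout:logstash]
server = logstash.example.com:5044
# send plain text instead of Splunk's cooked S2S protocol,
# which Logstash cannot parse
sendCookedData = false
```

Note that DB Connect inputs run on a full Splunk Enterprise instance rather than on a UF, which may be why the MySQL data never reaches Logstash through this path.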
Hi @LizAndy123, please try this: | rex "project id : (?<Project_Id>\d+) and metadata id : \w+\sis\s:\s(?<Size>\d+) and time taken to upload is: (?<Upload_Speed>\w+)" You can test it against your sample event. Ciao. Giuseppe
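A quick way to test the extraction inside Splunk itself, using makeresults with the sample event from the question (Upload_Speed tightened to \d+ so only the number is captured, per the follow-up):

```
| makeresults
| eval _raw="POST Uploaded File Size for project id : 123 and metadata id : xxxxxxxxxxxx is : 1234 and time taken to upload is: 51ms"
| rex "project id : (?<Project_Id>\d+) and metadata id : \w+\sis\s:\s(?<Size>\d+) and time taken to upload is: (?<Upload_Speed>\d+)"
| table Project_Id Size Upload_Speed
```

This should yield Project_Id=123, Size=1234, Upload_Speed=51.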
Hi, we have data from Change Auditor coming in via a HEC setup on a Heavy Forwarder. This HF instance was upgraded to version 9.2.2. Since then, I am seeing a difference in the way Splunk displays new events on the SH: it is now converting UTC->PST. I ran a search for the previous week, and for those events it converts the timestamp correctly, from UTC->Eastern. I am a little confused, since both searches are run from the same search head against the same set of indexers; if there were a TZ issue, wouldn't Splunk have converted both incorrectly? I also ran the same searches on an indexer with identical output: recent events show in PST, whereas older events continue to show as EST. Here are some examples (screenshots omitted): previous-week events show Eastern, while recent events show a UTC->PST conversion instead. I did test this manually via Add Data, and Splunk correctly formats it to Eastern. How can I troubleshoot why recent events in search are showing a PST conversion? My current TZ setting on the SH is still set to Eastern Time. I have also confirmed that the system time for the HF, indexers, and search heads is set to Eastern. Thanks
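One way to pin the interpretation down, assuming the raw Change Auditor timestamps carry no explicit timezone offset, is to set TZ for that sourcetype in props.conf on the parsing tier (the HF for HEC data); the sourcetype name here is a placeholder:

```
# props.conf on the Heavy Forwarder (sourcetype name is a placeholder)
[changeauditor:events]
TZ = UTC
```

Comparing _time against _indextime for one recent and one older event would also show whether timestamp parsing actually changed after the upgrade or only the display did.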
I have a log with a sample like the following: POST Uploaded File Size for project id : 123 and metadata id : xxxxxxxxxxxx is : 1234 and time taken to upload is: 51ms So in this event, the project id is 123, the Size is 1234, and the Upload Speed is 51ms. I want to extract the project id, size, and upload time as fields. Also, regarding the upload time, I guess I just need the number, right?
Hi @Nicolas2203, it's a gap in the Splunk architecture: there isn't an HA solution for Heavy Forwarders. You have two options: install the add-on on a Search Head Cluster, so the cluster manages the add-ons and HA is guaranteed, although many users don't love having ingestion systems in the user front end. The second option is to configure more HFs and manually enable one at a time, but this isn't an automatic recovery solution, and you have to manage checkpoints between HFs. I suggest adding a request about this to Splunk Ideas. Ciao. Giuseppe
Hi Splunk community, I have a quick question about an app, such as the Microsoft Cloud Services app, in a multiple-Heavy-Forwarder environment. The app is installed on one Heavy Forwarder and makes API calls to Azure to retrieve data from an event hub and store it in an indexer cluster. If the Heavy Forwarder where the add-on is installed goes down, no logs are retrieved from the event hub. So, what are the best practices for making this kind of app, which retrieves logs through API calls, more resilient? The same applies to some Cisco add-ons that collect logs from Cisco devices via an API. For now, I will configure the app on another Heavy Forwarder without enabling data collection, but in case of failure, human intervention will be needed. I would be curious to know what solutions you implement for this kind of issue. Thanks, Nicolas
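A common stopgap, matching the standby-HF approach described above, is to deploy the add-on to a second HF with its inputs disabled, so failover is just flipping one flag; the stanza name below is purely illustrative and should match the add-on's real input stanza:

```
# inputs.conf in the add-on's local/ directory on the standby HF
# (stanza name is illustrative; copy it from the active HF's config)
[mscs_azure_event_hub://prod_hub]
disabled = 1
```

Keep in mind the checkpoint state does not move with it, so on failover the standby may re-collect or skip some events.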
I am afraid I get the same results even with maxspan.
Hi @OgoNARA, good for you, see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @timtekk, good for you, see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated
@inventsekar I have updated limits.conf under system/local and it does not change anything; the issue still persists.

[default]
max_mem_usage_mb = 500

[searchresults]
maxresultrows = 86400
Hi @Stives, good for you, see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated
Could this just be due to different timezones and/or UTC? Can you provide examples of raw events, their _time timestamp (as set when they were indexed), and their _indextime, to see if that's where the difference is coming from?
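A sketch of a search that surfaces both timestamps side by side for that comparison (the index name is a placeholder):

```
index=your_index
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S %z")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S %z")
| table event_time index_time _raw
```

If event_time and index_time diverge by a whole number of hours, the offset usually points at a TZ parsing problem rather than a display setting.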
Hi Giuseppe, thank you. I finally managed to adjust the permissions. The problem was that the user was not properly defined in the Search & Reporting app's permissions. Now it's fixed. Thank you. BR, Stives
Hi @OgoNARA, the issue is probably related to wrong timestamp parsing of your events: your events are probably using the European format (dd/mm/yyyy), but you didn't define this format in props.conf, and Splunk by default assumes the US format (mm/dd/yyyy), so in the first twelve days of the month Splunk reads a wrong timestamp and you get some future events as well as some past events. How to solve it: add the correct format to props.conf for these events using the TIME_FORMAT option (with TIME_PREFIX pointing at where the timestamp starts). Ciao. Giuseppe
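A minimal props.conf sketch for the European day-first format; the sourcetype name, prefix, and exact format string are placeholders to adapt to the real events:

```
# props.conf on the parsing tier (values are placeholders)
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

With an explicit TIME_FORMAT, Splunk no longer guesses between dd/mm and mm/dd, so days 1-12 of the month parse correctly.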
DISA is blocking me, so I will have to create a workaround. I will update when I figure it out.
I got the same error trying to extract the file, and also when I tried it with a previous version, 3.7.1. I tried the command-line install, but I didn't have an account it would accept.