Hello All! I'm setting up Splunk Enterprise 8.2 on RHEL 8 and want to avoid using root as much as possible (forewarning: we are a Windows shop, so if this isn't possible please help us understand). I've set up a user called splunk in place of the default admin account it wants to create. We noticed that when we tried to configure Splunk Web to listen on port 80 it never worked, and we ended up having to go back to port 8000. I've read that you can use Apache as a proxy to get around this, but that was a pretty old document, and I'm not sure if it still applies to newer versions of RHEL or if there is a better approach. Any help would be greatly appreciated, as we are a small shop trying to juggle this in the midst of multiple other projects. Thanks!
I'm trying to use our Splunk environment as a replacement for an older syslog server. We have multiple indexers, and we've set up a load balancer in front of them to handle packets coming in on UDP port 514 and spread them across the indexers. That part works well, but I'm having trouble with the props and transforms configurations needed to get those incoming events into the correct indexes. I assume I'm just overlooking something silly, but I need another set of eyes. We're using a small app that's deployed from a cluster master to the indexers, with these three configuration files:

inputs.conf (yes, port 5140 is intentional; the load balancer handles the port translation):

[udp:5140]
disabled = 0
connection_host = ip
source = syslog
sourcetype = syslog

props.conf:

[source::udp:5140]
TRANSFORMS = override_index_f5, override_sourcetype_f5

transforms.conf:

[override_index_f5]
SOURCE_KEY = _raw
REGEX = (.*)f5-svc-ip=(.*)
DEST_KEY = _MetaData:Index
FORMAT = f5_connlog

[override_sourcetype_f5]
SOURCE_KEY = _raw
REGEX = (.*)f5-svc-ip=(.*)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:connection-log

The intent of the above is to take events that look like this:

Jul 14 09:22:33 10.24.43.13 LOCAL1.INFO: Jul 14 2021 09:22:33 f5-svc-ip=1.2.3.201, f5-svc-port=636, externalip=2.3.4.91, externalport=13703, internalip=5.6.7.9, internalport=13703

and route them to the "f5_connlog" index with the "f5:connection-log" sourcetype. Instead, these events are landing in the "main" index (since no other index is specified), with the "syslog" sourcetype. I assume that's happening because the events aren't matching, but the regex I'm using is about as simple as can be. (Obviously, once I figure out what I'm doing wrong, there will be more transforms, but this is a small, simple test case.) So, wise folks, what am I overlooking? As a related question, is it possible to perform multiple actions on a single match?
(In the above, I'm using the same source_key and same regex, so is it possible to combine the sourcetype and index transforms into a single stanza? I know they're two separate things, but it just feels slightly redundant to have to run the same regex twice.)
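A hedged sketch of one thing worth checking (an assumption, not a confirmed diagnosis): because inputs.conf overrides source to "syslog", a props stanza keyed on [source::udp:5140] may never match; keying the stanza on the sourcetype instead might behave differently. Also, each transforms stanza can write only one DEST_KEY, so index and sourcetype do need separate stanzas, but both already fire from the single TRANSFORMS line, so the duplicated regex is normal (SOURCE_KEY defaults to _raw and can be omitted):

```
# props.conf - sketch; assumes the "source = syslog" override in
# inputs.conf is why the [source::udp:5140] stanza never matches
[syslog]
TRANSFORMS-f5 = override_index_f5, override_sourcetype_f5

# transforms.conf - one DEST_KEY per stanza, so two stanzas are
# required; both run against the same event via the TRANSFORMS line
[override_index_f5]
REGEX = f5-svc-ip=
DEST_KEY = _MetaData:Index
FORMAT = f5_connlog

[override_sourcetype_f5]
REGEX = f5-svc-ip=
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:connection-log
```

The leading and trailing (.*) around the match are unnecessary for an unanchored REGEX, so the sketch drops them.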
Hello, I am monitoring Windows event logs and ingesting them into my indexers. The issue is that even though each event has a unique EventRecordID, I am seeing the same event multiple times in Splunk, sometimes up to 28 copies. Second to that, when I run the two searches in the picture I can see that the same event was indexed multiple times (14) between 13:33:31 and 13:36:00. Any help on how to rectify this issue is greatly appreciated. Please see attached the two searches showing multiple indexed results and also multiple index times.
Hello, I'm running Splunk 8.1.2 and I'm trying to group different sources of an index and count them within one query. These are the groups I'm trying to build:

index: license_compliance
fields:
- prod
- dev
- other (anything that does not end in prod or dev)

index=license_compliance OR source="/license_compliance-splunk-data/iCinga_ingest/*"
| rex field=source "\/license_compliance-splunk-data\/iCinga_ingest\/(?<log_source>\w)"
| eval log_source="iCinga_ingest".log_source
| stats dc(source)
| dedup source, name
| timechart span=1d count(name) by source

The data currently looks like this:

"/license_compliance-splunk-data/iCinga_ingest/iCingaDev_2021-07-07.csv"

I would like to get something like this:

07/07:
iCinga_Prod: 5
iCinga_Dev: 0
iCinga_Other: 2

Thanks in advance!
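A hedged sketch of one approach (the filename pattern iCingaProd_*/iCingaDev_* is assumed from the single example shown): the `\w` in the posted rex captures only a single character, so a longer capture plus a case() bucket may get closer to the desired prod/dev/other split:

```
index=license_compliance source="/license_compliance-splunk-data/iCinga_ingest/*"
| rex field=source "iCinga_ingest/iCinga(?<env>[A-Za-z]+)_"
| eval log_source=case(env=="Prod", "iCinga_Prod",
                       env=="Dev",  "iCinga_Dev",
                       true(),      "iCinga_Other")
| timechart span=1d count by log_source
```

timechart then produces one column per bucket per day, which matches the desired output shape.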
Hi, I would like to extract details that are present in the events that follow the event containing the search string. Below is some sample data and the expected output.

[7/14/21 3:00 CDT] 3 IDs are found for the type 234456 and for the subtype 12334^12344
[7/14/21 3:00 CDT] It is being sent to will@sons
[7/14/21 3:01 CDT] It is being sent to william@sons
[7/14/21 3:01 CDT] It is being sent to heather@sons

Expected output:

Type    Subtype  ID            No. of IDs
234456  12334    will@sons     3
        12344    william@sons
                 heather@sons

Thanks in advance!
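One hedged way to sketch this (the index and sourcetype names are placeholders): extract Type/Subtype from the header event, carry them forward onto the following "sent to" events with filldown, then aggregate. Splitting the 12334^12344 pair into separate rows would need an additional makemv/mvexpand step:

```
index=your_index sourcetype=your_sourcetype
| rex "for the type (?<Type>\d+) and for the subtype (?<Subtype>\S+)"
| rex "sent to (?<ID>\S+)"
| sort 0 _time
| filldown Type Subtype
| where isnotnull(ID)
| stats values(ID) as ID, dc(ID) as "No. of IDs" by Type, Subtype
```

The sort 0 _time puts events in ascending time order so filldown copies each header's values onto the events that follow it.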
We work 8 to 5 daily. How is it possible to learn about the major events that happened overnight while we are offline? Thank you in advance.
Hi All, I have come across quite a few old posts about subnet configuration in the Hurricane Labs App for Shodan (https://splunkbase.splunk.com/app/1767/) and questions about IP subnet configuration in 'Configure Subnets'. Can someone who has already set this up help with the questions below:

1. Is CIDR notation supported in 'Configure Subnets'?
2. I have a list of around 100 subnets in CIDR notation; how do I go about configuring them in 'Configure Subnets'? (I tried pasting the complete list, but I'm not sure that is working.) Does the list have to be separated by commas?

I appreciate any support/guidance anyone can provide.
Hey, I am sure many of you who have VPC logs in Splunk have come across this issue.

Raw log:

2 unknown eni-xxxxxxxxxxxxx 192.168.0.10 192.168.0.15 3558 6443 6 9 1196 1625657222 1625657282 ACCEPT OK

The text highlighted in red is the event start_time, and I want to use it as _time. My props.conf:

[aws:cloudwatchlogs:vpcflow]
TIME_FORMAT = %s
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?>\S+\s){10}
MAX_TIMESTAMP_LOOKAHEAD = 10

Still no luck.
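For what it's worth, a hedged sketch of the stanza I'd try: in a default VPC flow record the epoch start time is the 11th space-separated field, so the prefix skips ten fields. Also worth checking (an assumption about the setup, not a diagnosis): props like these only take effect on the first full Splunk instance that parses the data - a heavy forwarder or indexer - and only for newly indexed events, so already-indexed data won't change.

```
# props.conf - sketch, assuming the 11th field is the epoch start time
[aws:cloudwatchlogs:vpcflow]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?:\S+\s+){10}
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
```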
Hi, we plan to deploy Splunk in our company environment. Since the environment is in a DMZ network, we need to open the firewall ports and whitelist certain URLs for the application. Could you help provide the specific URLs used for downloading add-ons from the Splunk apps site? We need to whitelist and open them from our internal network. Thank you.
I would like to automatically send an email to all email addresses that are the output of a search. My problem is that Splunk is indeed sending an email to all the addresses, like it should, but the email body is empty in every case. This is the query I use to send the email (the search itself is above these lines; its output is user_name, fullname, and email):

| table user_name fullname email
| map maxsearches=5000 search="stats count
  | eval email=\"$email$\"
  | eval fullname=\"$fullname$\"
  | table fullname email
  | sendemail to=$result.email$ subject="Subject" message=\"Dear colleague, XXXXXX Kind regards, Tim\" sendresults=true inline=true"

The query was created by a colleague of mine, whom I can't ask for help anymore since he moved to a different company. I'm not sure what's wrong with this query. I tried searching the Splunk community and the net, but was not able to come up with a solution by myself.
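A hedged sketch of a variant worth trying (the field names come from the post; this is an assumption, not a confirmed fix): with sendresults=true and inline=true, the body is built from the inner search's result table, so starting the inner search with makeresults and eval-ing the mapped values onto that row gives each email one visible result to inline:

```
| table user_name fullname email
| map maxsearches=5000 search="| makeresults
    | eval email=\"$email$\", fullname=\"$fullname$\"
    | table fullname email
    | sendemail to=\"$email$\" subject=\"Subject\"
        message=\"Dear colleague, XXXXXX Kind regards, Tim\"
        sendresults=true inline=true"
```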
Hi, I want to create a real-time alert for about 3,000 messages per second, with an action for each message that sends an HTTP alert to another system. My problem is that when I tried to do that, I received only about 50-100 messages per second and got a big delay. What is the best way to handle this throughput? Can we use batching in a real-time alert?
L.s., I want to get the latency from the input on a forwarder to an index, so we use the app Meta Woot. It creates an inputlookup file, meta_woot. In this file are the latest in-times, host names, and index names. So far so good. Next is to use this file to calculate whether a host is recent, delayed, or late. Those searches are in the app and work fine. But I want a little extension: a table with the indexes as the leading column that calculates (by index) the percentage of recent/late hosts, summed to one outcome per index. So far the theory; now my tries. I used the search below:

| inputlookup meta_woot where index=*
| eval convert_late=(1440*60)
| eval convert_delayed=(60*60)
| eval last_time=(now()-recentTime)
| eval last_time_indexed=case(last_time < convert_delayed, "Recent", last_time > convert_late, "Late", last_time > convert_delayed, "Delayed")
| eval compliant_host=if(last_time_indexed="Recent", "1","0")
| stats count(compliant_host) as chost by index, compliant_host

This gives me a result where the outcome is split into index vs. compliant_host and chost:

index        compliant_host  chost
main         0               11
main         1               123
msad         1               6
nmon         1               5
openshift    1               1
temp_log     1               1
wineventlog  1               2

Now the question: how do I calculate the percentage for index main ((123+11)/11) so I get a percentage value? How do I calculate with values after a stats command? Please help. Thanks in advance, greetz, Jari
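One hedged way to get a per-index percentage is to compute the compliant count and the total in the same stats call instead of splitting by compliant_host, then divide with eval - a sketch based on the search above:

```
| inputlookup meta_woot where index=*
| eval last_time=now()-recentTime
| eval compliant_host=if(last_time < (60*60), 1, 0)
| stats sum(compliant_host) as recent_hosts, count as total_hosts by index
| eval pct_recent=round(recent_hosts / total_hosts * 100, 2)
```

With the numbers shown for main (123 recent of 134 total), this would give 123/134*100 = 91.79.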
Hi, I updated my Splunk app in Splunkbase yesterday, receiving the correct Splunk AppInspect certification. However, when installing the app on a Splunk Cloud instance (from the "Browse More Apps"), version 1.0.0 is installed instead of 1.0.2. The app also shows up as "Last Updated: an hour ago", but still installs the wrong version. I made sure to update the build in the [install] stanza of the app.conf file, as well as the versions in the app.conf and app.manifest. Any ideas?  
Hi, I just finished creating my Splunk app, however for some reason the app isn't available for Splunk Enterprise (only Splunk Cloud) - here is an image of the "compatibility" section in Splunkbase. When I try to find the app in the Enterprise app "store", it isn't there. Something I thought might have happened was that I didn't add the following section to the manifest:

"platformRequirements": {
    "splunk": {
        "Enterprise": "*"
    }
},

However, even after adding it, the app still isn't available for Splunk Enterprise. Has this happened to anyone?
Hello! Can anyone please lend a hand with this issue? I'm still fairly new to this and am working my way through Fundamentals 2.

Scenario: Sophos Central antivirus clients are installed on Linux and Windows. We use a heavy forwarder to pull Sophos Central logs via an API into a dedicated antivirus index. These logs lack the product_version needed to populate the "Malware Operations - Clients By Product Version" dashboard panel. I've found the data I need in two places. A log entry in /var/log/messages, which is being ingested into the unix index and looks like this:

Jul 13 03:59:37 server-name savd[5860]: update.updated: Updated to versions - SAV: 10.5.2, Engine: 3.79.0, Data: 5.85

And a file /opt/sophos-av/engine/suiteVersion, which is readable by the splunk user and contains:

10.5.2.3790.203

I used the field extractor to create a regex string that matches the log entry and extracts the product_version. I've created a custom app with the files below in it, pushed from the deployment server onto one host, and pushed from the deployer to our Enterprise Security search head cluster:

props.conf

[syslog]
EXTRACT-date,time,host,process_name,product_version = ^(?P<date>\w+\s+\d+)\s+(?P<time>[^ ]+)\s+(?P<host>[^ ]+)\s+(?P<process_name>\w+)(?:[^ \n]* ){6}(?P<product_version>\SAV:\s\d*\.\d*\.\d*,\sEngine:\s\d*\.\d*\.\d*,\sData:\s\d*.\d*.+)

eventtypes.conf

[product_version]
search = product_version=*

tags.conf

[eventtype=product_version]
malware = enabled
operations = enabled

When I search the unix index I can see the product_version field in the search results, along with the tags.

Questions:
1.) How do I copy these events into the antivirus index, and is this necessary? (I'm thinking of efficiency, as the cim_Malware_indexes macro contains the antivirus and firewall indexes, which are much smaller than the unix index.)
2.) How do I get the product_version to show in the "Clients By Product Version" panel, which uses this search:
| `malware_operations_tracker(time_product_version)`
| search
| stats dc(dest) by product_version
| sort 10 - dc(dest)

3.) Is there a better way to do this? Any help is appreciated.
Hi, I have a log like the one below:

_time                   source  cpu_load_percent  process  pctCPU  cpuTIME  PID
7/14/21 1:59:41.000 PM  top     5.6               java     5.6     1:49.46  125353

Here is my SPL:

index="main" pctCPU="*" process="java" pctCPU>0

I have 3 java processes, each with a unique PID. Now I want a timechart that shows the pctCPU of the maximum PID. Any idea? Thanks.
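A hedged sketch, assuming "pctCPU of the maximum PID" means the per-interval peak pctCPU broken out per PID (the span is a placeholder):

```
index="main" pctCPU="*" process="java" pctCPU>0
| timechart span=5m max(pctCPU) by PID
```

If instead only the process with the numerically largest PID is wanted, an eventstats max(PID) followed by a where PID=max(PID) filter could precede the timechart.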
I'm looking to enable Workload Management for Splunk, and I'm trying to understand whether this is fully supported on a deployment using cgroups v2. The wording on the Configure Linux systemd for workload management documentation page suggests that Splunk understands cgroups v1:

CPU: /sys/fs/cgroup/cpu/system.slice/<SPLUNK_SERVER_NAME>.service
Memory: /sys/fs/cgroup/memory/system.slice/<SPLUNK_SERVER_NAME>.service

But nothing (except the diagram) suggests it can operate under cgroups v2 as well. Does anyone know whether Splunk fully supports cgroups v2, or will our deployment need to be cgroups v1?

Edit: Some investigation suggests that Splunk fails to start with Workload Management enabled on a cgroups v2 host:

Jul 15 10:30:50 hostnamehere splunk[16311]: Couldn't open dir /sys/fs/cgroup/cpu/: No such file or directory
Jul 15 10:30:50 hostnamehere splunk[16311]: Error perform systemd preparation: No such file or directory
Hi community, I can get 2126 events for the past 7 days with the following search:

index=* "*Error Sending SMS : org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error*"

One of the traceids in the events is traceid: 3312e53cb50bfe4c. With a traceid I get from the above events, I enter the following in the search box and search:

index=* 3312e53cb50bfe4c

What I want is 010d8aff-16a8-4f69-82ea-59484741432e under cf_app_name: user-profile-metadata-prod, which is the GUID I mentioned in the title. The question is: there are 2126 traceids, so how do I get these "GET /api/users/v2/<GUIDs>" values out of the traceids?

Best regards, Madoc
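A hedged sketch of one approach (the traceid extraction pattern is assumed from the one example shown, and it assumes traceid is available as a search-time field in both event types): extract every traceid from the error events in a subsearch, feed them to the outer search, and pull the GUID out of the matching request lines. The default subsearch result limit of 10,000 should comfortably cover 2126 traceids.

```
index=* "GET /api/users/v2/"
    [ search index=* "*Error Sending SMS : org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error*"
      | rex "traceid:\s*(?<traceid>[0-9a-f]+)"
      | fields traceid
      | format ]
| rex "GET /api/users/v2/(?<guid>[0-9a-fA-F-]{36})"
| rex "traceid:\s*(?<traceid>[0-9a-f]+)"
| stats values(guid) as guid by traceid
```

If traceid is not an extracted field in the request events, renaming it to "search" before format (| fields traceid | rename traceid as search | format) turns the values into raw-text search terms instead.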
Hi, I need to download the Splunk Universal Forwarder for Linux 32-bit matching Splunk Enterprise version 8.2.1. Can you help me? Thanks in advance, Monica Corso
Hi. I have Splunk on a Windows network and am collecting data from clients using universal forwarders. I need to make a report of newly installed applications on the clients. I am searching for event IDs 11707 and 1033, but it seems these events are logged only when an application is installed via Windows Installer. For example, we installed Notepad++ on a client, and we do not have any event for that. Can someone please advise? Thank you.