All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is there a way to assign workload pools to certain roles? Say we have two types of users, TypeA and TypeB. Can TypeA users be assigned only to limited_perf? And TypeB users assigned to
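For context, Splunk workload management rules can route searches to pools based on role. A minimal sketch, assuming roles named typeA_role/typeB_role and pools limited_perf/standard_perf already exist (rule names are hypothetical; verify stanza syntax against your version's workload_rules.conf.spec):

```
# workload_rules.conf (sketch)
[typeA_to_limited]
predicate = role=typeA_role
workload_pool = limited_perf

[typeB_to_standard]
predicate = role=typeB_role
workload_pool = standard_perf
```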
Hi, I would like to ask for help with converting the timezone of events I am indexing with the GCP Cloud Platform add-on before they are indexed. Events currently arrive in UTC, and I need to convert that time to GMT-4. I have been trying from the sourcetype configuration, without success. I was also looking at this link, without much success: https://docs.splunk.com/Documentation/Splunk/8.2.1/Data/Applytimezoneoffsetstotimestamps
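For reference, per-sourcetype timezone handling is normally done with the TZ setting in props.conf on the first full Splunk instance that parses the data (heavy forwarder or indexer). A minimal sketch, assuming the add-on assigns the sourcetype google:gcp:pubsub:message (check your actual sourcetype). Note that Splunk stores _time internally as a UTC epoch and renders it per each user's timezone preference, so TZ only tells the parser how to interpret the raw timestamp:

```
# props.conf (sketch)
[google:gcp:pubsub:message]
# POSIX-style zone name: Etc/GMT+4 means UTC-4
TZ = Etc/GMT+4
```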
I created a saved search and a trigger upon completion to send myself an email that links to the finished job. It works mostly as intended by default, but it lands in my junk folder because the sender is "splunk" with no @domain.com. This would just be a minor inconvenience if the report were only going to me, but we are planning on rolling it out to several users and want to fix this.

I initially tried going to the report's Advanced Settings and changed "action.email.from" from just "splunk" to "splunk@ourcompany.com". At that point the emails stopped coming completely. I checked the log at splunk/var/log/splunk/python.log for errors, but the only ones I have encountered are due to size limits, which were resolved by removing the inline table. I have also gone to Server settings » Email settings and changed "Send emails as" to add my company domain.

It appears the only emails coming through are the ones with the report's "action.email.from" field set to just "splunk" without the domain, but they still go to my junk folder and cannot be marked as a safe sender because there is no domain on the address. Could someone assist with troubleshooting this issue and with adding a domain to the sender address correctly? Thanks.
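A sketch of the two places the sender address can be set, in case it helps narrow things down. One thing worth checking: when the From address changes, some SMTP servers begin rejecting the message outright (look at the mail server's own logs), so the rewrite itself may not be the problem. The report name below is hypothetical:

```
# alert_actions.conf (global default for all email actions)
[email]
from = splunk@ourcompany.com

# savedsearches.conf (per-report override)
[My Report Name]
action.email.from = splunk@ourcompany.com
```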
Hi All, hope everyone is doing well. We are sending data from Demisto to Splunk, but the data is being indexed cumulatively. Yesterday we got 10 incidents and they were indexed yesterday; today we got 5 incidents, but today's indexing includes yesterday's 10 incidents along with today's 5, so we are getting cumulative results. Kindly help me with the same. Thanks in advance, Balaji
Any ideas for troubleshooting why ClamAV is not showing up in the Linux dashboard?
Hi, I am getting the below error:

editTracker failed, reason='Unable to connect to license master=https://192.168.0.21:8189 Error connecting: SSL not configured on client'

after running:

sudo ./splunk edit licenser-localslave -master_uri https://192.168.0.21:8189

I have tried removing the sslPassword from system/local/server.conf, but this has not worked. Thanks, Joe
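For comparison, the same license-master pointer can be set directly in server.conf instead of via the CLI; a sketch (restart splunkd afterwards):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[license]
master_uri = https://192.168.0.21:8189
```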
So we have a search creating a notable event. The search is configured to suppress for 2 days and is managed in a Splunk app. If we install a new version of the app, or make any changes to the search, the throttle appears to be reset. My question is whether it is possible to preserve the throttling between app installs or changes to the search. Ultimately, we want to avoid changes to the app causing duplicate notable events to be created.
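For reference, the throttle settings themselves live in savedsearches.conf. One mitigation sketch is to keep them in the app's local/ directory, which app upgrades (which replace default/) leave untouched. Whether the accumulated suppression state survives a search edit is a separate question, but this at least keeps the configuration stable across installs (search name hypothetical):

```
# $SPLUNK_HOME/etc/apps/my_app/local/savedsearches.conf
[My Notable Search]
alert.suppress = 1
alert.suppress.period = 2d
```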
Hello, when a user clicks on a panel for drilldown, it shows the relevant records in a new window. I am looking to hide the search query from the end user. I could see some params in the drilldown URL:

display.page.search.mode=verbose
dispatch.sample_ratio=1
display.general.type=statistics

But nothing related to hiding the search query. Is this feasible by passing an additional parameter to the drilldown? Here's a sample dashboard to explain in more detail:

<dashboard>
  <label>test</label>
  <description>Test dashboard</description>
  <row>
    <panel>
      <chart>
        <title>Stats in pie chart</title>
        <search>
          <query>index=_internal sourcetype=splunkd log_level=ERROR | stats count by host</query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</dashboard>

When you click on a slice of the pie chart, it opens a new tab where the search query is visible to the user, which I would like to hide from the end user.
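One workaround sketch: replace the default drilldown with an explicit <drilldown><link> that sends the user to a purpose-built detail dashboard, so the raw query string never appears in the URL. The dashboard path and form token below are illustrative:

```
<chart>
  <title>Stats in pie chart</title>
  <search>
    <query>index=_internal sourcetype=splunkd log_level=ERROR | stats count by host</query>
    <earliest>-60m@m</earliest>
    <latest>now</latest>
  </search>
  <option name="charting.chart">pie</option>
  <drilldown>
    <!-- $click.value$ carries the clicked host; the target dashboard runs its own search -->
    <link target="_blank">/app/search/my_detail_dashboard?form.host=$click.value$</link>
  </drilldown>
</chart>
```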
In the attached screenshot, I created a graph which shows the hourly maximum response time per request/response pair. Now, in the drilldown, when I click on any slot's maximum response time (marked in yellow), I want to show only the logs of the request/response pair (2 events will be in the result) that has this maximum value.

Query used for the graph:

index=salcus sourcetype=ticket_mgmt_rest source=http:ticket_mgmt_rest
| rename "properties.o2-TroubleTicket-ReqId" as REQID
| transaction REQID keepevicted=true
| search eventcount=2
| timechart span=1h eval(round(max(duration),3)) as MaxRespTime count by sourcetype
| fillnull
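A drilldown-search sketch for a panel like this, assuming the clicked cell's value arrives in the $click.value2$ token (the exact token depends on the visualization; everything else mirrors the base search):

```
index=salcus sourcetype=ticket_mgmt_rest source=http:ticket_mgmt_rest
| rename "properties.o2-TroubleTicket-ReqId" as REQID
| transaction REQID keepevicted=true
| search eventcount=2
| eval MaxRespTime=round(duration,3)
| where MaxRespTime=$click.value2$
```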
Hi All, I am looking for a little help with a search today. I want to create an alert based on this search that only triggers when the same hostname comes up twice in a row. The search I have so far is below; I am unsure how to include/return two machines of the same name:

index="login_pi" source="WinEventLog:LoginPI Events" SourceName="Application Threshold Exceeded"
| rex field=_raw "Actual value\\\":\s+\\\"(?<actual_value>\d+)"
| search actual_value>=10
| table Target,actual_value,ApplicationName,Title

Here is an example event:

07/14/2021 10:39:49 AM
LogName=LoginPI Events
EventCode=800
EventType=4
ComputerName=RNBSVSIMGT02.rightnetworks.com
SourceName=Application Threshold Exceeded
Type=Information
RecordNumber=1786721
Keywords=Classic
TaskCategory=None
OpCode=Info
Message={ "Description": "Measurement duration (7.561s) exceeded threshold of 5s (51.22%)", "Actual value": "7.561", "Threshold value": "5", "Measurement": "quickbooksopen_2021", "Locale": "English (United States)", "RemotingProtocol": "Rdp", "Resolution": "1920 × 1080", "ScaleFactor": "100%", "Target": "BPOQCP01S01", "TargetOS": "Microsoft Windows Server 2016 Standard 10.0.14393 (1607)", "AppExecutionId": "4ed43186-648c-4e8e-96ee-9e4b52e468cb", "AccountId": "a4a6655b-f7ac-4783-aec5-698a146eb2cf", "AccountName": "rightnetworks\\eloginpi082", "LauncherName": "RNBSVSI23", "EnvironmentName": "BPOQCP01S01", "EnvironmentId": "bc31c8f6-e8c0-4278-93c3-08d8040960f8", "ApplicationName": "QB_2021_Open", "ApplicationId": "ece9c6b9-6662-45be-970d-2708603ca13b", "Title": "Application threshold exceeded" }
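Two observations, sketched below. First, the rex pattern \d+ captures only the integer part of values like 7.561 (so the >=10 comparison sees "7", and as a string at that). Second, a stats count by Target with a count filter is one way to express "the same hostname came up at least twice":

```
index="login_pi" source="WinEventLog:LoginPI Events" SourceName="Application Threshold Exceeded"
| rex field=_raw "Actual value\\\":\s+\\\"(?<actual_value>[\d.]+)"
| eval actual_value=tonumber(actual_value)
| where actual_value>=10
| stats count values(actual_value) as actual_values by Target, ApplicationName, Title
| where count>=2
```

If "twice in a row" must mean consecutively rather than twice within the search window, streamstats over a sorted event stream would be the direction to explore instead.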
Hello All! I'm setting up Splunk Enterprise 8.2 on RHEL 8 and wanted to avoid using root as much as possible (forewarning: we are a Windows shop, so if this isn't possible, please help us understand). I've set up a user called splunk in place of the default admin account it wants to create. We noticed that when trying to configure the web interface on port 80, it would never work, and we ended up having to go back to port 8000. I've read that you can use Apache as a proxy to get around this, but that was a pretty old document, and I'm not sure if it still applies to newer versions of RHEL or if there is a better approach. Any help would be greatly appreciated, as we are a small shop trying to juggle this in the midst of multiple other projects. Thanks!
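For reference, ports below 1024 are privileged on Linux, which is why Splunk Web can't bind to 80 as a non-root user. One common sketch on RHEL 8 is to leave Splunk Web on 8000 and have firewalld redirect port 80 to it (run once as root):

```
sudo firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8000
sudo firewall-cmd --reload
```

This keeps splunkd running entirely as the unprivileged splunk user, with the redirect handled in the kernel rather than by a separate proxy process.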
I'm trying to use our Splunk environment as a replacement for an older syslog server. We have multiple indexers, and we've set up a load balancer in front of them to handle packets coming in on UDP port 514 and spread them across the indexers. That part works well, but I'm having trouble with the appropriate props and transforms configurations to get those incoming events into the correct indexes. I assume I'm just overlooking something silly, but I need another set of eyes. We're using a small app that's deployed from a cluster master to the indexers, with these three configuration files:

inputs.conf (yes, port 5140 is intentional; the load balancer handles the port translation):

[udp:5140]
disabled = 0
connection_host = ip
source = syslog
sourcetype = syslog

props.conf:

[source::udp:5140]
TRANSFORMS = override_index_f5, override_sourcetype_f5

transforms.conf:

[override_index_f5]
SOURCE_KEY = _raw
REGEX = (.*)f5-svc-ip=(.*)
DEST_KEY = _MetaData:Index
FORMAT = f5_connlog

[override_sourcetype_f5]
SOURCE_KEY = _raw
REGEX = (.*)f5-svc-ip=(.*)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:connection-log

The intent of the above is to take events that look like this:

Jul 14 09:22:33 10.24.43.13 LOCAL1.INFO: Jul 14 2021 09:22:33 f5-svc-ip=1.2.3.201, f5-svc-port=636, externalip=2.3.4.91, externalport=13703, internalip=5.6.7.9, internalport=13703

and route them to the "f5_connlog" index with the "f5:connection-log" sourcetype. Instead, these events are landing in the "main" index (since no other index is specified), with the "syslog" sourcetype. I assume that's happening because the events aren't matching, but the regex I'm using is about as simple as can be. (Obviously, once I figure out what I'm doing wrong, there will be more transforms, but this is a small, simple test case.) So, wise folks, what am I overlooking? As a related question, is it possible to perform multiple actions on a single match?
(In the above, I'm using the same SOURCE_KEY and the same regex, so is it possible to combine the sourcetype and index transforms into a single stanza? I know they're two separate things, but it feels slightly redundant to have to run the same regex twice.)
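Two things worth checking, sketched below. First, the inputs.conf stanza overrides source to "syslog", so a props stanza keyed on [source::udp:5140] may never match; keying on the sourcetype instead sidesteps that. Second, the (.*) wrappers are unnecessary (REGEX performs an unanchored match), and a transform can write only one DEST_KEY, so index and sourcetype do need separate stanzas even when the regex is identical:

```
# props.conf (sketch: match on the sourcetype the input assigns)
[syslog]
TRANSFORMS-f5 = override_index_f5, override_sourcetype_f5

# transforms.conf
[override_index_f5]
SOURCE_KEY = _raw
REGEX = f5-svc-ip=
DEST_KEY = _MetaData:Index
FORMAT = f5_connlog

[override_sourcetype_f5]
SOURCE_KEY = _raw
REGEX = f5-svc-ip=
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:connection-log
```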
Hello, I am monitoring Windows event logs and ingesting them to my indexers. The issue is that even with a unique EventRecordID, I am seeing multiple copies of events in Splunk, sometimes up to 28.

Second to that, when I complete the two searches in the picture, I can see that the same event is being indexed multiple times (14) between 13:33:31 and 13:36:00. Any help on how to rectify this issue is greatly appreciated. Please see attached the two searches showing multiple indexed results and also multiple index times.
Hello, I'm running Splunk 8.1.2 and I'm trying to group different sources of an index to count them within one query. These are the fields I'm trying to group:

index: license_compliance
fields:
- prod
- dev
- other (anything that does not end in prod or dev)

index=license_compliance OR source="/license_compliance-splunk-data/iCinga_ingest/*"
| rex field=source "\/license_compliance-splunk-data\/iCinga_ingest\/(?<log_source>\w)"
| eval log_source="iCinga_ingest".log_source
| stats dc(source)
| dedup source, name
| timechart span=1d count(name) by source

The data currently looks like this: "/license_compliance-splunk-data/iCinga_ingest/iCingaDev_2021-07-07.csv"

I would like to get something like this:

07/07:
iCinga_Prod: 5
iCinga_Dev: 0
iCinga_Other: 2

Thanks in advance!
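A sketch of one way to bucket the filenames, assuming names like iCingaDev_2021-07-07.csv and iCingaProd_... (adjust the rex to your actual naming convention):

```
index=license_compliance source="/license_compliance-splunk-data/iCinga_ingest/*"
| rex field=source "iCinga_ingest/(?<log_source>[^_]+)_"
| eval bucket=case(match(log_source, "Prod$"), "iCinga_Prod",
                   match(log_source, "Dev$"),  "iCinga_Dev",
                   true(),                     "iCinga_Other")
| timechart span=1d dc(source) by bucket
```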
Hi, I would like to extract the details present in the events that follow the event containing the search string. Below is sample data and the expected output.

Sample data:

[7/14/21 3:00 CDT] 3 IDs are found for the type 234456 and for the subtype 12334^12344
[7/14/21 3:00 CDT] It is being sent to will@sons
[7/14/21 3:01 CDT] It is being sent to william@sons
[7/14/21 3:01 CDT] It is being sent to heather@sons

Expected output:

Type    Subtype  ID            No. of IDs
234456  12334    will@sons     3
        12344    william@sons
                 heather@sons

Thanks in advance!
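A sketch using filldown to carry the type/subtype from the summary event down onto the following "sent to" events (index, sourcetype, field names, and rex patterns are assumptions based on the sample above):

```
index=my_index sourcetype=my_sourcetype
| sort 0 _time
| rex "(?<num_ids>\d+) IDs are found for the type (?<type>\d+) and for the subtype (?<subtype>\S+)"
| rex "It is being sent to (?<id>\S+)"
| filldown type subtype num_ids
| where isnotnull(id)
| stats values(id) as ID by type, subtype, num_ids
```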
We work 8-to-5 daily. How is it possible to learn about the major events that happened overnight while we are offline? Thank you in advance.
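The usual pattern for this is a scheduled alert or report covering the overnight window, delivered by email when the team arrives. A sketch (index, filter, recipient, and cron schedule are all assumptions to adapt):

```
# savedsearches.conf (sketch)
[Overnight Major Events]
search = index=main earliest=-15h@h latest=@h log_level=ERROR | stats count by host, sourcetype
cron_schedule = 0 8 * * 1-5
action.email = 1
action.email.to = team@example.com
```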
Hi All, I have come across quite a few old posts about subnet configuration in the Hurricane Labs App for Shodan (https://splunkbase.splunk.com/app/1767/) and questions around IP subnet configuration in 'Configure Subnets'. Can someone who has already set this up help with the questions below?

1. Is CIDR notation supported in 'Configure Subnets'?
2. I have a list of around 100 subnets in CIDR notation; how do I go about configuring them in 'Configure Subnets'? (I tried pasting the complete list, but I'm not sure that is working.) Does the list have to be separated by commas?

Appreciate any support/guidance anyone can provide.
Hey, I am sure many of you who have VPC Flow Logs in Splunk have come across this issue.

Raw log:

2 unknown eni-xxxxxxxxxxxxx 192.168.0.10 192.168.0.15 3558 6443 6 9 1196 1625657222 1625657282 ACCEPT OK

The highlighted value (1625657222) is the event's start_time, and I want Splunk to use it as _time. My props.conf:

[aws:cloudwatchlogs:vpcflow]
TIME_FORMAT = %s
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?>\S+\s){10}
MAX_TIMESTAMP_LOOKAHEAD = 10

Still no luck.
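A sketch of the same stanza with a plain non-capturing group, plus the usual gotcha: index-time settings like TIME_PREFIX only take effect on the first full Splunk instance that parses the data (heavy forwarder or indexer, not a universal forwarder), and only for events indexed after the change:

```
# props.conf on the parsing tier (sketch)
[aws:cloudwatchlogs:vpcflow]
SHOULD_LINEMERGE = false
# Skip the 10 fields before start_time
TIME_PREFIX = ^(?:\S+\s+){10}
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
```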
Hi, we plan to deploy Splunk in our company environment. Since the environment is in a DMZ network, we need to open the firewall ports and whitelist certain URLs for the application. Could you help provide the specific URLs for downloading add-ons from the Splunk apps site? We need to whitelist and open them from our internal network. Thank you.
I would like to automatically send an email to all email addresses which are the output of a search. My problem is that Splunk is indeed sending an email to all addresses, like it should, but the email body is empty in all cases.

This is the query I use to send the email (the search query comes before these lines; its output is user_name, fullname and email):

|table user_name fullname email
|map maxsearches=5000 search=" stats count
|eval email=\"$email$\"
|eval fullname=\"$fullname$\"
|table fullname email
|sendemail to=$result.email$ subject="Subject" message=\"Dear colleague, XXXXXX Kind regards, Tim\" sendresults=true inline=true"

The query was created by a colleague who I can't ask for help anymore, since he moved to a different company. I'm not sure what's wrong with this query. I tried searching the Splunk community and the net, but was not able to come up with a solution by myself.
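One thing that stands out: inside the map search string, subject="Subject" uses unescaped double quotes, which can terminate the quoted search early so everything after it (including message=...) never reaches sendemail. A sketch with consistent escaping throughout (the body text placeholder is kept as-is):

```
| table user_name fullname email
| map maxsearches=5000 search="| makeresults
    | eval email=\"$email$\", fullname=\"$fullname$\"
    | sendemail to=\"$email$\" subject=\"Subject\"
        message=\"Dear colleague, XXXXXX Kind regards, Tim\"
        sendresults=true inline=true"
```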