All Topics


Hi! I have an environment with a single indexer + search head and approximately 100 UFs sending logs to it. Some of them are quite active log producers. I have realized that invoking a splunk restart to activate a config change will usually cause data loss for a period of 1-2 minutes for some of the sources, which is something I would rather avoid. It should be doable by stopping the UFs first, then restarting the indexer and finally starting the UFs again. Is there another way? I figure the data originating from log files monitored by the UFs gets sent to the indexer, but the poor UFs don't know the indexer is unable to process the last bits, so the UF thinks the data was properly received and processed. I read about Persistent Queues, but that didn't quite seem to solve this issue either. Any suggestions?
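One commonly suggested mitigation (a sketch, not the asker's actual config; the group name and indexer address below are placeholders) is to enable indexer acknowledgement on the forwarders, so a UF keeps each chunk in its wait queue until the indexer confirms it was processed, and re-sends anything unacknowledged after a restart:

```spl
# outputs.conf on each universal forwarder (hypothetical group/host names)
[tcpout:primary_indexer]
server = indexer.example.com:9997
# Ask the indexer to acknowledge received data; chunks that were in flight
# during an indexer restart are re-sent instead of being silently dropped.
useACK = true
```

This trades a little forwarder memory and latency for delivery guarantees, which is usually the right trade for active log producers.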
The documentation (9.0.2 Search Reference) describes a function ipmask(<mask>,<ip>) that is supposed to apply the given netmask to the given IP. Seems pretty simple, and the examples are mostly straightforward... unless you consider what a netmask of 0.255.0.244 would actually mean on the network. The more interesting problem is what you're allowed to pass to this function. From what I can tell, the first parameter MUST be a quoted string of digits, and particularly NOT the name of a field in your data:

|makeresults 1 | eval ip = "1.2.3.4", mask = "255.255.255.0"

With these values defined:
| eval k = ipmask("255.255.255.0", "5.6.7.8") works fine, k=5.6.7.0
| eval k = ipmask("255.255.255.0", ip) works fine, k=1.2.3.0
| eval k = ipmask("255.255.255.0", mask) works fine, k=255.255.255.0 (but isn't a meaningful calculation)
| eval k = ipmask(mask, "5.6.7.8") does not work: Error in 'EvalCommand': The arguments to the 'ipmask' function are invalid.
| eval k = ipmask(mask, ip) does not work: Error in 'EvalCommand': The arguments to the 'ipmask' function are invalid.

I'm sure there's some highly technical reason why the SPL parser does not handle this fairly common case, and if there's anyone who can share that reason, I'd love to hear it. --Joe
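Until the parser accepts a field as the mask argument, one workaround (a sketch, assuming the set of masks appearing in your data is small and known in advance) is to enumerate the possible masks with case(), so that each ipmask() call still receives a literal string:

```spl
| makeresults
| eval ip = "1.2.3.4", mask = "255.255.255.0"
| eval k = case(
    mask == "255.255.255.0", ipmask("255.255.255.0", ip),
    mask == "255.255.0.0",   ipmask("255.255.0.0", ip),
    mask == "255.0.0.0",     ipmask("255.0.0.0", ip))
```

This obviously doesn't scale to arbitrary masks, but it covers the common case of a handful of standard prefix lengths.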
Hi, we are evaluating the Splunk product to get assets (hosts, servers, network devices, etc.), asset information such as name and network interfaces, and the associated vulnerability details. We need to fetch these details with the REST API. We explored the REST API but didn't succeed; kindly let us know the possible ways to get assets and the associated vulnerability details using the API.
Hi, looking for guidance please on how to alert on recurring auth events over multiple time spans; I can't get my head around how to construct the search terms. For example:
- a user account is locked out >=X times per hour for >=X consecutive hours
- a user has >=X login failures in an hour for >=X consecutive hours
Hope that makes sense! TIA
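One possible shape for the second case (a sketch; the index, sourcetype, threshold and field names are assumptions you would need to adapt) is to count failures per user per hour, keep only the hours over the threshold, then check whether the qualifying hours are contiguous:

```spl
index=auth action=failure earliest=-24h
| bin _time span=1h
| stats count AS failures BY user _time
| where failures >= 5
| streamstats current=f window=1 last(_time) AS prev_hour BY user
| eval consecutive = if(_time - prev_hour == 3600, 1, 0)
```

An alert could then trigger when `consecutive=1` appears at least N-1 times in a row for the same user; the lockout case is the same pattern with a different base search.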
I'm looking for some help on this: I am struggling with parsing the logs for when a pool was down, and with sending an alert over a 5 minute time period.

Dec 8 05:18:39 servername notice mcpd[7794]: 01070727:5: Pool /FGID/server_443_pool member /FGID/load-balancer:443 monitor status up. [ /Common/tcp: up ] [ was down for 0hr:0min:16sec ] host = servername source = /srv/syslog/f5-logs/f5/servername/2022120805

12/8/22 5:18:39.000 AM Dec 8 05:18:39 servername notice mcpd[7794]: 01070727:5: Pool /FGID/server-pool member /FGID/load-balancer:443 monitor status up. [ /Common/tcp: up ] [ was down for 0hr:0min:16sec ] host = servername source = /srv/syslog/f5-logs/f5/servername/2022120805

12/8/22 5:18:38.000 AM Dec 8 05:18:38 servername notice mcpd[6928]: 01070727:5: Pool /FGID/server_443_pool member /FGID/load-balancer:443 monitor status up. [ /Common/tcp: up ] [ was down for 0hr:0min:15sec ] host = servername source = /srv/syslog/f5-logs/f5/servername/2022120805
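A possible starting point (a sketch; the index/sourcetype and the exact message layout are assumptions based on the samples above) is to extract the pool, member, status and downtime with rex, then schedule the search to run every 5 minutes over the last 5 minutes as an alert:

```spl
index=f5 sourcetype=syslog "monitor status"
| rex "Pool (?<pool>\S+) member (?<member>\S+) monitor status (?<status>\w+)"
| rex "was down for (?<downtime>\d+hr:\d+min:\d+sec)"
| stats latest(status) AS status latest(downtime) AS downtime BY pool member
| where status == "down"
```

With "Run every 5 minutes" and a time range of the last 5 minutes, the alert fires only for pool members whose most recent monitor status is down.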
Hi Team, I want to get the 90th percentile of the response time in Splunk. Suppose I have the response times below; what is the query with which I can get the 90th percentile in Splunk?

1.379 1.276 1.351 2.062 1.465 3.107 1.621 1.901 1.562 27.203

Please help with the same.
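The perc90() aggregation of the stats command does this directly (a sketch; the field name response_time is an assumption). Reproducing the sample values with makeresults:

```spl
| makeresults
| eval response_time=split("1.379 1.276 1.351 2.062 1.465 3.107 1.621 1.901 1.562 27.203", " ")
| mvexpand response_time
| stats perc90(response_time) AS p90
```

Against real data it collapses to `... | stats perc90(response_time) AS p90`. Note that perc90() uses an approximate algorithm on large data sets; exactperc90() is available when an exact value matters.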
Hi All, we are getting XML logs in our Splunk, but from an investigation perspective it's very hard for us to read the data. Is there a way we can parse it?
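At search time, the spath command can extract fields from XML (or JSON) in _raw (a sketch; the index, sourcetype and the path in the second example are made-up names you would replace with your own):

```spl
index=myindex sourcetype=my_xml_logs
| spath
| table *
```

With no arguments, spath auto-extracts all elements; for a single element you can target it explicitly, e.g. `| spath output=user path=Event.UserData.User`.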
Hello Team, I have been trying to search for this information for a while now and haven't been successful yet. Our plan is to deploy AppDynamics as a SaaS-based deployment for testing purposes, test the ServiceNow integration with AppDynamics, and lastly monitor applications and VMs that are deployed on the AWS cloud. Initially we thought of going with the IBL APM Enterprise licenses, but will that be sufficient? Will we need extra licenses to monitor applications and VMs deployed in the cloud? We are planning to deploy 3 ESXi hosts, deploy some applications and start the monitoring from there; the next step would be to integrate ServiceNow with AppDynamics, and then monitor the applications and VMs deployed on the AWS cloud. If anyone can please clarify what and how many licenses should be considered here, that would be super helpful.
Hi, can anyone please help me with this requirement? I'd like to build an overview dashboard that displays how many change requests, incidents, and tasks are submitted and completed each day. FYI: these changes, incidents, and tasks are from ServiceNow. I work on ServiceNow but want to expand my knowledge of Splunk. Regards, Suman P.
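If the ServiceNow data is ingested with the Splunk Add-on for ServiceNow, a daily overview panel could start from something like this (a sketch; the index name and the exact sourcetypes are assumptions that depend on your ingestion setup):

```spl
index=servicenow sourcetype IN ("snow:change_request", "snow:incident", "snow:sc_task")
| eval type=case(sourcetype=="snow:change_request", "Change Request",
                 sourcetype=="snow:incident",       "Incident",
                 sourcetype=="snow:sc_task",        "Task")
| timechart span=1d count BY type
```

A second panel for "completed" items would add a filter on the record state field (e.g. closed/resolved states) before the timechart.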
Hi, I need help with writing the [http] stanza in inputs.conf for the HEC token configuration. Please assist. Thank you.
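A minimal sketch (the token value, index and sourcetype are placeholders; the token GUID is normally generated for you when you create the token in Splunk Web):

```spl
# inputs.conf
[http]
disabled = 0
port = 8088

[http://my_hec_input]
token = 00000000-0000-0000-0000-000000000000
index = main
sourcetype = my_sourcetype
disabled = 0
```

The global [http] stanza enables the HEC listener itself; each [http://name] stanza defines one token and its defaults.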
Hello Splunk Lovers! I have the date format 202211131614220000 and I want to convert this format into something readable for Splunk. I understand I should use strptime and strftime, but I have some problems. Please give me a pointer.
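Assuming the value is YYYYMMDDHHMMSS followed by four extra digits (an assumption; they may be sub-seconds), a sketch would parse the first 14 characters with strptime into epoch time and re-render it with strftime:

```spl
| makeresults
| eval raw="202211131614220000"
| eval epoch=strptime(substr(raw, 1, 14), "%Y%m%d%H%M%S")
| eval readable=strftime(epoch, "%Y-%m-%d %H:%M:%S")
```

For the sample value this yields readable=2022-11-13 16:14:22. If the trailing digits are sub-seconds you care about, parse them separately and append them.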
I'm trying to get the following into a table and have a count of the successful attempts. I have tried a few ways but am still lost. Below are my attempts:

Ex1:
(index="sfdc" sourcetype="sfdc:loginhistory" eventtype="sfdc_login_history" LoginType="SAML Sfdc Initiated SSO" app="sfdc" action=success) OR (index="microsoft" (sourcetype="azure:aad:signin" eventtype="azure_aad_signin" app="windows:sign:in" action=success) OR (sourcetype="azure:aad:user" jobTitle!=null)) | eval login=case((sourcetype=="azure:aad:signin" AND eventtype=="azure_aad_signin"), "windows", (sourcetype=="sfdc:loginhistory" AND app="sfdc"), "salesforce") | table displayName mail jobTitle officeLocation eventtype

Ex2:
index=microsoft (sourcetype="azure:aad:user" givenName="***" surname="***" jobTitle!="null" officeLocation!="null") OR (sourcetype="azure:aad:signin" eventtype="azure_aad_signin" app="windows:sign:in" action=success) | stats count by displayName mail jobTitle officeLocation | rename displayName AS "Display Name" mail AS Email department AS Department jobTitle AS "Job Title" officeLocation AS Branch | fields - count | sort + Display Name

Ex3:
index=microsoft (sourcetype="azure:aad:signin" eventtype="azure_aad_signin" app="windows:sign:in" action=success) OR (sourcetype="azure:aad:user" givenName="***" surname="***" jobTitle!="null" officeLocation!="null") | eval joiner=if(sourcetype="azure:aad:signin", action, displayName) | stats values(action) as action by displayName mail jobTitle officeLocation | rename displayName AS "Display Name" mail AS Email department AS Department jobTitle AS "Job Title" officeLocation AS Branch | sort + Display Name

What I'm trying to achieve is a table listing the following:

Display Name | Email | Job Title | Branch | Windows Logon Attempts* | Salesforce Login Attempts*

*Windows Logon Attempts / Salesforce Login Attempts is the part where I get stuck: I can't seem to populate those columns from the corresponding index and sourcetype. Ex2 and Ex3 are without Salesforce (which I can live with). If you can help me with Ex3 that would be great! Any ideas from the Splunkers in here? Thanks, S
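One way to get both attempt counts onto a single row per user (a sketch; it assumes the Azure sign-in, Azure user, and Salesforce events share, or can be normalized to, a common identifier such as userPrincipalName or mail, which may not hold in your data) is conditional counting with count(eval(...)):

```spl
(index=microsoft sourcetype="azure:aad:signin" action=success)
OR (index=sfdc sourcetype="sfdc:loginhistory" action=success)
OR (index=microsoft sourcetype="azure:aad:user")
| eval user=lower(coalesce(userPrincipalName, mail, Username))
| stats values(displayName)    AS "Display Name"
        values(mail)           AS Email
        values(jobTitle)       AS "Job Title"
        values(officeLocation) AS Branch
        count(eval(sourcetype=="azure:aad:signin"))  AS "Windows Logon Attempts"
        count(eval(sourcetype=="sfdc:loginhistory")) AS "Salesforce Login Attempts"
    BY user
| fields - user
```

The key design point is doing one stats over all three data sets keyed on the normalized user field, rather than trying to join the counts on afterwards.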
Hello, greetings! I have data in the following way:

Device  Processor  Status
01      Splunkd    Running
01      Sql        Stopped
01      Python     Stopped
02      Splunkd    Stopped

In the above output, if the status of at least one processor on a device is Running, the device should be considered online; otherwise offline. Can you please help me with the query? Thank you in advance. Happy Splunking!
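One way (a sketch; the field names are taken from the sample above and the status value is assumed to be exactly "Running") is to count Running processors per device and derive the state from that count:

```spl
| stats count(eval(status=="Running")) AS running_count BY Device
| eval state=if(running_count > 0, "online", "offline")
| table Device state
```

For the sample data this would mark device 01 online (Splunkd is Running) and device 02 offline.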
Hi, we have configured an input to connect to a Redshift database from Splunk DB Connect. It was working fine, but suddenly the input is showing an invalid database connection on the inputs page and we are unable to create a new connection. Telnet to the DB connects. What could be the possible causes of this?
Hi, my customer has configured Splunk to get data in from the GitHub audit log stream, with an HTTP Event Collector installed on their DMZ server (with port 8088 open to the outside internet), which forwards the data to another Splunk server within their secure zone with only ports 9997, 8000 and 8088 opened. But in order to open port 8088 on the DMZ server, they have to pass their security vulnerability check. The problem is that the check returned various security vulnerabilities, and that prevents them from opening the port. The vulnerabilities returned are as below:

- phpPgAdmin redirect.php URL redirection
- Spring Boot Actuator endpoint exposed
- Missing "Content-Security-Policy" header
- Sensitive Authentication (Basic) Information Leakage
- Missing HttpOnly attribute in session cookie
- Cookies with insecure, incorrect or missing SameSite attributes
- Discover compressed directories
- Unnecessary HTTP response headers were found in the application
- Include sensitive session information in persistent cookies
- Discovery of web application source code exposure patterns
- Host header injection

Are there any security vulnerability check reports done by Splunk, or some way to resolve these vulnerabilities? Thank you in advance.
In the raw event there is a line that goes Brand\="xyz". What's the rex command I can use to extract this in my search? If possible, I'd like to exclude the \ and the "" from the extraction itself.
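One pattern that should work (a sketch; backslash escaping inside a rex string is fiddly, so verify the exact number of backslashes against your data) captures only what is between the quotes, which keeps the \ and the "" out of the extracted value:

```spl
... | rex "Brand\\\\=\"(?<Brand>[^\"]+)\""
```

The four backslashes become a two-backslash regex escape for the literal \ in the event, and the `[^"]+` capture group stops at the closing quote, so Brand=xyz.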
Hi, does anyone have experience with the Website Monitoring app? I am facing an issue with adding inputs, especially if an input (check) requires HTTP authentication. The error is: "401 Splunk cannot authenticate the request. CSRF validation failed."

Request URL: https://xxxx:8443/en-US/splunkd/__raw/services/storage/passwords?output_mode=json
Request Method: POST
Status Code: 401 Splunk cannot authenticate the request. CSRF validation failed.
Remote Address: 10.217.11.78:8443
Referrer Policy: no-referrer

I found out that the request is missing one header parameter, X-Splunk-Form-Key.

Request URL: en-US/splunkd/__raw/services/storage/passwords?output_mode=json

Request headers:

Accept: text/javascript, text/html, application/xml, text/xml, */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8,sk;q=0.7
Connection: keep-alive
Content-Length: 61
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Cookie: mintjs%3Auuid=02ced06b-7ec3-40e2-8e0b-91040e343001; built_by_tabuilder=yes; ta_builder_current_ta_name=TA-splunk-backup; ta_builder_current_ta_display_name=Splunk%20backup; splunkweb_csrf_token_8443=1505950XXXXXXXXXXX; session_id_8443=6e995a2d52b3a34ade550aafff50XXXXXXXXXXX; splunkd_8443=OUucWpZKKsQtgnedQ98lJ5VRCosW7HAdUh6fia3B^Q^D9HofK5tn11AwTAEiKXhzUL_HPsAiG91v8evtXcVri9MYUmXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX0fCIm84az_izL
Host: xxxx:8443
Origin: https://xxxx:8443
sec-ch-ua: "Not?A_Brand";v="8", "Chromium";v="108", "Google Chrome";v="108"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
X-Requested-With: XMLHttpRequest

Response headers:

Connection: Keep-Alive
Content-Length: 104
Content-Type: application/json; charset=UTF-8
Date: Thu, 08 Dec 2022 23:06:45 GMT
Server: Splunkd
Vary: Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN

Any idea why this parameter is missing? Splunk runs on Linux. I tried clearing the cache, an incognito window, ...
I want to strip certain results by time from my search. I eventually plen to place a dedup command between the first and second searches, however I am running into issues with the earliest and latest... See more...
I want to strip certain results by time from my search. I eventually plan to place a dedup command between the first and second searches; however, I am running into issues with the earliest and latest modifiers on the search command in the second search. The following 3 searches work fine and return results throughout the week:

host=x
host=x earliest=-7d
host=x earliest=-7d | search *

But these searches return no results, even when there are events in the listed time frame:

host=x | search host=x earliest=-7d
host=x | search host=x earliest=-4d

Does anyone have any idea why? I would like to strip off search results based on time in the second search, but it doesn't seem to work.
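One commonly suggested workaround (a sketch, not an explanation of the modifier behaviour) is to filter on _time explicitly with where and relative_time, which is evaluated against each event's timestamp no matter where it appears in the pipeline:

```spl
host=x earliest=-7d
| dedup host
| where _time >= relative_time(now(), "-4d")
```

Here the outer time range stays wide, dedup runs over the full window, and the where clause trims the results to the last 4 days afterwards.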