All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have created a calculated field which parses _time from a date stamp in the data. However, it does not set _time correctly. If I set the calculated field to something different, it works fine. So I was just wondering if there is any documentation anywhere that talks about being able to override _time with a calculated field.

NB: I can't set the event _time at ingestion to the correct date from the data, because I am ingesting a complete data set every day, where historical results may change, so I'm just using a 24h search and then changing _time.
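For context, a minimal sketch of overriding _time inline at search time (the index and field names below are placeholders, and the strptime format must match the actual date stamp in the data):

```
index=my_index earliest=-24h
| eval _time=strptime(event_date, "%Y-%m-%d %H:%M:%S")
```

Whether a calculated field (EVAL-_time in props.conf) behaves the same way is exactly the open question here; the inline eval is the known-working baseline.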
Does Splunk support enabling WORM on SmartStore S3 buckets?
I created an input_type (data input type) to collect data from an external REST API using the Splunk Add-on Builder app. How do I delete it?
Looking for the web link to all the past Splunk and ES .conf events, with their lectures and content posted. Thanks a million in advance.
I have a CSV file containing the SAM account names of 1200 AD groups, and I need to find the proper search query to get the date each group was last modified or changed.
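One possible shape for this, sketched with placeholder names (the lookup file, index, and field names are assumptions; EventCodes 4735/4737/4755 are the Windows Security "group changed" events for local, global, and universal groups, assuming AD audit logs are being indexed):

```
| inputlookup ad_groups.csv
| join type=left sAMAccountName
    [ search index=wineventlog EventCode IN (4735, 4737, 4755)
      | rename Group_Name as sAMAccountName
      | stats latest(_time) as last_modified by sAMAccountName ]
| fieldformat last_modified = strftime(last_modified, "%Y-%m-%d %H:%M:%S")
```

Groups with no change events in the search window come back with an empty last_modified, which is itself useful information here.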
I am sure I am missing something easy but, for some reason, when I compare these two values (they are in string format in my data) the comparison isn't correct. I don't seem to have this issue when I use this eval statement with other versions I have, but for some reason this comparison just throws it out of whack:

| eval Status=case("21.3.0.44"<"5.5.5.0","Declining","21.3.0.44"="5.5.5.0","Mainstream","21.3.0.44">"5.5.5.0","Emerging")

The result always shows "21.3.0.44" as less than. Please advise if I am missing some caveat I am not aware of.

P.S. I tried converting these to numbers, but due to all the decimal points in version numbers, the result isn't a valid number. I suppose I could replace the decimals somehow, but thought I would ask first before going down this route. Thanks in advance!
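The comparison above is lexicographic ("2" sorts before "5", so "21..." < "5..." as strings). One workaround sketch, assuming every version has exactly four numeric parts (the field name is a placeholder, and eval's printf function requires a reasonably recent Splunk version):

```
| eval parts=split(version, ".")
| eval v_norm=printf("%05d%05d%05d%05d",
    tonumber(mvindex(parts,0)), tonumber(mvindex(parts,1)),
    tonumber(mvindex(parts,2)), tonumber(mvindex(parts,3)))
```

Zero-padding each part makes string comparison of v_norm equivalent to numeric comparison of the version, so the case() conditions can compare the normalized values instead.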
Our ITSI is showing some "Detected Anomaly" entries for the KPI "Index Usage". Where and how can I find the notable events for those detected anomalies? I didn't find them in index=itsi_tracked_alerts. Thanks
Hello there, and sorry for the rough translation. Some time ago I installed an add-on called "Splunk Secure Gateway"; during installation and configuration it was necessary to create a new role named "securegateway". Today I am trying to create a new user and I get the error message "In handler 'users': Could not get info for role that does not exist: securegateway". The funny thing is that I am only assigning the role "user" to the user I am trying to create.
Hi all,

We are looking at options for monitoring databases and JVMs on Splunk Cloud, as we have the following issues:

DB Connect app: ruled out due to a data security compliance issue.
JMX app: this app supports Java version 8 and above, but we need to enable monitoring for Java versions 6 and 7.

If you have any documentation or suggestions on other ways of monitoring these, please share them.

Thanks in advance.
Hi, I hope you are well. I need your help: I want the Splunk universal forwarder to pick up only alert data from my logs. How can I tell the forwarder to take only the alert logs? Say I have 5000 log files, of which only 1000 are alert logs; I want the forwarder to send only those 1000 files. Please help me with this. Thanks in advance.
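If the alert logs are distinguishable by file name, a whitelist regex on the monitored path limits which files the universal forwarder reads. A hedged inputs.conf sketch (the path and filename pattern are placeholders):

```
[monitor:///var/log/myapp]
whitelist = .*[Aa]lert.*\.log$
```

If alert and non-alert events are mixed inside the same files, the filtering has to happen at parse time instead (props/transforms routing unwanted events to nullQueue on a heavy forwarder or indexer), since a universal forwarder does not inspect event content.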
Hello All, We have a mixed environment where some UFs point to our on-prem heavy forwarders while others point to Splunk Cloud indexers. I would like to update all UFs to point to Splunk Cloud, but have some questions. Notes: (1) we also have an on-prem deployment server, and (2) as a test I installed a UF on my Mac and it is forwarding logs to Splunk Cloud.

* What's the best way to update the old UF config to the new one? In other words, can someone point me to resources that explain how to best use the deployment server to do this?
* Will I lose the transformations applied to logs that currently go through the HF?

Thanks in advance
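For reference, the target state is usually just an outputs.conf pointing at the cloud inputs (hostnames below are placeholders; in practice this configuration comes from the Splunk Cloud universal forwarder credentials app, which can be distributed to the UFs as an app via the deployment server):

```
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.example.splunkcloud.com:9997, inputs2.example.splunkcloud.com:9997
```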
Currently the app I'm working on generates log events in the following (simplified/obfuscated) format before they are ingested into Splunk:

2021-09-24 19:00:00.016 +00:00 [Warning] Something.SomethingElse.YetAnotherThing: jsonData={ "alice": "Alison", "bob": "Bobby", "group" : {"joe": "Joseph", "jane": "Janet"}}

The only parts of those log events that matter are the timestamp at the leftmost end and the well-formed JSON data after the equals sign. What I wish were possible is to change the event created by the application to be only a well-formed JSON object that includes the timestamp, in other words something like this:

{"_time":"2021-09-24 19:00:00.016 +00:00", "alice": "Alison", "bob": "Bobby", "group" : {"joe": "Joseph", "jane": "Janet"}}

But that is its own challenge (outside of Splunk) which will take me time to make happen. In the meantime, I wonder if there is something I could set up in Splunk so that, at ingestion time, the original log event is transformed into that latter format. This would save me from having to run rex and rename commands like the following as part of each and every Splunk query I want to run, which is not only annoying but, I'm guessing, slows down the queries as well:

host="something"
| rex "jsonData=(?<jsonData>.+})"
| rename jsonData as _raw
| spath
| search event="*Exception*"

Is this possible? Furthermore, is this possible given that the events are ingested from Azure via the plugin Splunk Add-on for Microsoft Cloud Services?
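One possible ingest-time approach, sketched with a placeholder sourcetype name: a SEDCMD on the parsing tier strips everything before the JSON object (timestamp extraction happens earlier in the ingestion pipeline, so _time should be unaffected), and KV_MODE=json then extracts the fields at search time:

```
# props.conf on the parsing tier (heavy forwarder / indexer)
[my_sourcetype]
SEDCMD-keep_json = s/^[^{]+jsonData=//

# props.conf on the search head
[my_sourcetype]
KV_MODE = json
```

Whether this applies cleanly to events arriving via the Splunk Add-on for Microsoft Cloud Services depends on where that data is parsed, so treat this as a starting point rather than a confirmed recipe.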
I have a requirement to split a single cell into two columns, into which I need to add data from different search results. I need a table view like this, where 2 will be the value without duplicates and 5 can be the value with duplicates. I need to build a dashboard with a table like this.
Hello. I am trying to show our Splunk instance inside another one of our application's webpages, so our styling can be applied to Splunk (basically putting our company sidebar on the left, with the rest of the page rendering our Splunk application). Initially I had the problem that it wouldn't load because of the same-origin header, but I was able to resolve that. However, now when I try to log in through that iframe, I get a 400 error. The only information returned by that error is "{"status":2}", and it also says "server error" under the login. Any ideas what this error might be, and why it may be happening?
I need to collect specific Splunk data for business analysis. My target URL is https://splunk.usce.l.az.fisv.cloud/en-US/app/epayments/postpayee_success_and_failure?form.SponsorId=*&form.SubscriberId=*&form.CorrelationId=*&form.Status=*&form.Exception=-&form.timespan.earliest=-7d%40h&form.timespan.latest=now. After logging in with my username/password, it shows "Post Payee Exception List".

I am trying to write a Python script to read the Splunk data for the last 7 days. Below is my code:

session = requests.Session()
response = session.post(LOGIN_URL, auth=HTTPBasicAuth(user, password), verify=False)
print(response.status_code)

The user/password are the same ones used for web access, and LOGIN_URL is 'https://splunk.usce.l.az.fisv.cloud/en-US/account/login?return_to=%2Fen-US%2F'. However, the response status code is 401, which is a failure. What's the correct Python way to log in to the Splunk website?

In addition, I am trying to connect to the Splunk server with the splunk-sdk package via port 8089. Below is my Python code:

import splunklib.client as client
import splunklib.results as results

HOST = "splunk.usce.l.az.fisv.cloud"
PORT = 8089
credentials = get_splunk_pwd()
username = credentials['username']
password = credentials['password']
service = client.connect(
    host=HOST,
    port=PORT,
    username=username,
    password=password)
print(service)

rr = results.ResultsReader(service.jobs.export("search index=_internal earliest=-24h | head 5"))
for result in rr:
    if isinstance(result, results.Message):
        # Diagnostic messages might be returned in the results
        print('%s: %s' % (result.type, result.message))
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        print(result)

Below is the output. It looks like the Splunk connection is established successfully, but the search is invalid. What's the valid search string based on my target URL in the first line?
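On the second part: jobs.export takes an SPL string, not a dashboard URL, so the dashboard's underlying search has to be copied out of its XML source (via the dashboard's edit view). A placeholder sketch of what the export call's search string might look like once that SPL is known (the index, sourcetype, and field names here are invented):

```
search index=epayments sourcetype=postpayee Status=* Exception=- earliest=-7d@h latest=now
```

The form.* parameters in the URL map to dashboard tokens inside that XML, so their values need to be substituted into the SPL before passing it to the SDK.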
<splunklib.client.Service object at 0x0000029461421790> DEBUG: Configuration initialization for /opt/splunk/etc took 91ms when dispatching a search (search ID: 1632765670.57370_31B6A7A0-BF6B-46EF-BD46-2CF0D6AB351A) DEBUG: Invalid eval expression for 'EVAL-SessionDateTime' in stanza [source::dbmon-tail://*/CCAuditLogSelect]: The expression is malformed. An unexpected character is reached at '“%Y-%m-%d %H:%M:%S.%3N”)'. DEBUG: Invalid eval expression for 'EVAL-TrxDateTime' in stanza [source::dbmon-tail://*/CCAuditLogSelect]: The expression is malformed. An unexpected character is reached at '“%Y-%m-%d %H:%M:%S.%3N”)'. DEBUG: base lispy: [ AND index::_internal ] DEBUG: search context: user="xzhang", app="search", bs-pathname="/opt/splunk/etc"  
I have Splunk Enterprise + ES and want to upgrade to 8.2.2. Thank you very much.
Hi everyone, let me try to explain. For example: I can detect when a user has connected from country X; at that moment Splunk sends me an email, and I need to start an investigation to determine whether that event is known. My question is: how can I configure a second search to run when that alert has been triggered? I would like Splunk to run a search with the information it got from the alert. I don't want to modify my original search, because if I add this second search to my first search, performance could be affected. Thanks and regards.
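One common pattern that avoids touching the original search, sketched with placeholder names: have the alert write its results to a summary index, then schedule a second search over that index on its own schedule:

```
# savedsearches.conf for the existing alert
[Suspicious Country Login]
action.summary_index = 1
action.summary_index._name = security_summary
```

The follow-up search then starts from index=security_summary source="Suspicious Country Login" and can enrich the fields the alert captured, without the original alert's SPL changing at all.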
Hi All, any advice on how to go about finding coverage gaps in a typical ES installation? We are ingesting logs from both AWS and on-prem servers. Is there any document or tool I can use to find out what's missing and what's covered, and to do an overall gap analysis? Also, can someone please point me to typical/important dashboards that we can leverage for everyday security tasks, other than the default out-of-the-box ones?
I have the following search:

index=main_index sourcetype="hec:google" operationName=createMobileAuthenticationOutcome direction=response source="customers-mobile-authentications-v1"
| dedup correlationId
| stats count by eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType eventData.log{}.downstreamRequestAdditionalLog.request.authenticationOutcome
| sort -count
| eventstats sum(count) as tot by eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType
| eval perc = round(count/tot*100,1)
| stats list(eventData.log{}.downstreamRequestAdditionalLog.request.authenticationOutcome) as "OutCome Request Details" list(count) as Count, list(perc) as "% per Auth Type" by eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType
| appendpipe
    [ stats sum(Count) as Count, sum("% per Auth Type") as "% per Auth Type" by eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType
      | eval "OutCome Request Details" = "Total Request" ]
| sort eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType
| rename eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType as "Auth Type"

which shows this table. But what I want to do is add another row for the total number of directOpenApp.Completed and pushNotifications.Completed. I've tried addtotals and can't get my head around appendcols. Thanks, r
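One possible shape for that, sketched with the long JSON paths renamed early for readability (the rename makes the appendpipe subsearch much easier to write; the outcome values like directOpenApp.Completed come from the data):

```
...
| rename eventData.log{}.downstreamRequestAdditionalLog.request.applicationEntryType as entryType,
         eventData.log{}.downstreamRequestAdditionalLog.request.authenticationOutcome as outcome
| stats count by entryType outcome
| appendpipe
    [ stats sum(count) as count by outcome
      | eval entryType="zz_All Entry Types" ]
```

The appendpipe adds one row per outcome holding the total across entry types; the zz_ prefix just keeps those total rows at the bottom after a sort on entryType.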
Hi All, my question is regarding ES. Can someone please point me to typical/important dashboards that we can leverage for everyday security tasks, other than the default out-of-the-box ones? We are ingesting logs from both AWS and on-prem servers.