All Posts


@inventsekar This works only for the IP address 1.2.3.4. What do I do if the IP address changes to 5.6.7.8 or 4.3.2.1?
After upgrading to 9.2.0, the Splunk App for AWS shows the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." and I am unable to open the dashboards. When I check the developer console, the error below is showing. How do I restore the dashboards of the Splunk App for AWS? Current version of the Splunk App for AWS: 6.0.2.
After upgrading to 9.2.0, the Splunk App for AWS shows the same error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." and I am unable to open the dashboard. When I check the developer console, the error below is showing. How do I bring it back?
I want a scheduled task to run the query and save the results twice a day, every day.
Hi @m92, you can schedule your alert to run twice a day using cron: 0 12,19 * * * The question is: do you want the same time period (e.g. the last 24 hours) for both searches? Ciao. Giuseppe
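For reference, the same schedule can be set directly in savedsearches.conf. A minimal sketch, assuming a report named "Twice Daily User IP Report"; the stanza name, search, and email address are placeholders:

[Twice Daily User IP Report]
search = index="index1" Users=* IP=*
enableSched = 1
cron_schedule = 0 12,19 * * *
dispatch.earliest_time = -12h
dispatch.latest_time = now
action.email = 1
action.email.to = you@example.com

With dispatch.earliest_time = -12h, the noon run covers midnight to noon and the 7:00 PM run covers 7:00 AM to 7:00 PM, so the two windows overlap; adjust the window to whatever period each report should cover.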
Thanks! This helped me move forward. Just one thing, if you can help: I have done it all, but I am not sure what I should put in the HTML (https://dev.splunk.com/enterprise/docs/devtools/customalertactions/createuicaa/) so that I can pass the IP to the Akamai API.
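As a rough sketch of what that HTML fragment usually contains: it just declares form inputs whose name follows the pattern action.<alert_action_name>.param.<param_name>. Assuming a hypothetical alert action called akamai_block with a parameter named ip, it might look like:

<form class="form-horizontal form-complex">
  <div class="control-group">
    <label class="control-label" for="akamai_block_ip">IP address</label>
    <div class="controls">
      <input type="text" name="action.akamai_block.param.ip" id="akamai_block_ip" />
      <span class="help-block">A token such as $result.ip$ passes the ip field of the triggering result to the action.</span>
    </div>
  </div>
</form>

The script backing the action then reads the value from the JSON payload it receives on stdin, under the configuration object.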
Hello Splunkers, I'd like to schedule a query twice a day, for example one run at 12:00 PM and the other at 7:00 PM, and then receive a report of each run. This would save me from having to run the query manually each time. Is it possible, and if so, how can I do it? The query in question is:

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| where NOT match(Users, "^AAA-[0-9]{5}\$")
| where NOT match(Users, "^AAA[A-Z0-9]{10}\$")
| eval ip=coalesce(IP, srcip)
| stats dc(index) AS index_count values(Users) AS Users values(destip) AS destip values(service) AS service earliest(_time) AS earliest latest(_time) AS latest BY ip
| where index_count>1
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table Users, ip, destip, service, earliest, latest

Thanks in advance!
Hello, This is the configuration that we have in the search head TA props.conf:

[ sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TIME_PREFIX = eventTime\\\"\:\\\"
EVENT_BREAKER = ([\r\n]+)
TRUNCATE = 0
MAX_TIMESTAMP_LOOKAHEAD = 30
EVENT_BREAKER_ENABLE = true
KV_MODE = json
Hello, Thanks for your reply. We have already tested putting this in the props.conf of our search head TA, but this also did not extract the event fields further. Regarding the Splunkbase TA, I am not sure about it; maybe I can give it a try.
Apologies, I have pasted the log below and just changed the words; hopefully this is easier to work with? The log file starts at "Software Version..." and always ends with the line below at the bottom of the log: "software Completed at 10/05/2024 09:00:06 local time"

Software Version 7.0.1890.0 on server.server.net
Entry 6828 starting at 10/05/2024 09:00:01
Starting via software on CustomerDomain
------------------------------------------------------------
Software Version 7.0.1890.0 on sql002
Entry 6828 starting at 10/05/2024 09:00:01
Submitted by software Autosubmit at 10/05/2024 08:00:04
Executing as company\account
Starting via software on CustomerDomain
Process ID XXXXX
------------------------------------------------------------
Activity: Preparing modules for first use.
Current Operation:
Status Description:
Name Used (GB) Free (GB) Provider Root CurrentLocation
---- --------- --------- -------- ---- ---------------
JD software company.company.net
2024-05-10T09:00:05.000Z | INFO | ba9992e7-1681-49b9-b984-711c34f89f4c | SQL002 | file | ICOMcheckfilearrival | Checking for arrival of new file
2024-05-10T09:00:06.000Z | INFO | ba9992e7-1681-49b9-b984-711c34f89f4c | SQL002 | file | ICOMcheckfilearrival | New File has been received.
2024-05-10T09:00:06.000Z | INFO | ba9992e7-1681-49b9-b984-711c34f89f4c | SQL002 | file | ICOMcheckfilearrival | Sync File has been received.
------------------------------------------------------------
Job Completed at: 10/05/2024 09:00:06
Elapsed Time: 00:00:04.2499362
Kernel mode CPU Time: 00:00:00.5468750
User mode CPU Time: 00:00:00.9531250
Read operation count: 2185
Write operation count: 73
Other operation count: 15510
Read byte count: 5156432
Write byte count: 1688
Other byte count: 205934
Total page faults: 36072
Total process count: 0
Peak process memory: 78073856
Peak job memory: 85004288
------------------------------------------------------------
------------------------------------------------------------
Final Status Code: 0, Severity: Success
Final Status: The operation completed successfully
------------------------------------------------------------
software Completed at 10/05/2024 09:00:06 local time
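As a hedged sketch of how one might break this file into one event per job in props.conf, assuming every job starts with "Software Version" at the beginning of a line and the day/month order implied by the ISO timestamps in the body:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Software Version )
TIME_PREFIX = starting at\s+
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20

The lookahead in LINE_BREAKER keeps "Software Version" as the first line of each new event; only the newlines matched by the first capture group are consumed.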
Hello, Thanks for your response. I have tried your suggestion on the search head but unfortunately it did not extract the "event" field further.  
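If the JSON payload is stuffed into a field with escaped quotes, a search-time workaround that sometimes helps is to unescape it and run spath over the result. A sketch, assuming the payload sits in a field called event:

| eval event_clean=replace('event', "\\\\\"", "\"")
| spath input=event_clean

This only affects the search in which it runs; it does not change how the data is indexed.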
Hi Everyone, If I lower the index retention and tell it to use the archive, what happens to the logs with the larger retention? For example: we currently have 1 year of retention. If we move to 6 months of retention + 18 months of archiving, what happens to logs older than 6 months?
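For context, both knobs live in indexes.conf; a minimal sketch, with the index name and archive path as placeholders:

[your_index]
# ~6 months, in seconds (180 days x 86400)
frozenTimePeriodInSecs = 15552000
# copy buckets here instead of deleting them when they freeze
coldToFrozenDir = /path/to/archive

As far as I know, when the retention is lowered, buckets whose newest event is older than the new frozenTimePeriodInSecs are rolled to frozen on the next housekeeping pass; with coldToFrozenDir set, they are copied to the archive rather than deleted, so the logs older than 6 months should land in the archive instead of disappearing.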
| eval Status=case(
    (priority="ERROR" AND tracePoint="EXCEPTION") OR like(message, "%Error while processing%"), "ERROR",
    priority="WARN", "WARN",
    (priority!="ERROR" AND tracePoint!="EXCEPTION") OR NOT like(message, "%(ERROR):%"), "SUCCESS")
| stats values(Status) as Status by transactionId
| eval Status=mvindex(Status, 0)
Take a look at the asset and identity framework documentation: https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Addassetandidentitydata Priorities can be assigned through the searches you write to pull in A&I data, or they can be derived from network subnets. Typically you would write searches that pull in data from your sources and assign priorities based on criteria such as whether the asset is a production asset, or whether the identity is a senior manager or a system administrator. This can be based on job title or group membership. A sketch of such a search follows below.
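Assuming a hypothetical ldap_identities.csv lookup with title and bu fields feeding the identity list, the priority assignment might look like:

| inputlookup ldap_identities.csv
| eval priority=case(match(title, "(?i)admin"), "critical",
                     match(bu, "(?i)finance"), "high",
                     true(), "medium")
| outputlookup es_identities.csv

The resulting lookup is then configured as an identity source in the Asset and Identity Management pages, and ES reads the priority column from it.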
Hello, Splunkers! I am learning Splunk ES and trying to understand how the urgency value is assigned to notables generated from correlation searches. I went over this article: How urgency is assigned to notable events in Splunk Enterprise Security - Splunk Documentation. So, if severity is assigned in the settings of the correlation search, where do we assign the priority to assets? Can someone please explain, or point me to a documentation page describing how this process (assigning priority) is done exactly? Specifically, I would really appreciate it if someone could share where this should be configured: on Enterprise Security itself or elsewhere, through the GUI, or by manually editing some config files. Also, perhaps a silly question, but can we also assign priority to identities, for example to give admin accounts a higher priority than regular accounts? Thank you for taking the time to read and reply to my post.
Maybe you can clarify the use case more? For example, how do the data and the model enter Splunk? Assuming that the data are in one set of ingested events (and that your model is about time series), are the predictions also in some ingested events? Or are the predictions in some sort of data table? Or is the model a prescribed mathematical formula from which Splunk is expected to calculate predictions? R² is nothing but mathematics. Splunk is not bad at math. But no, Splunk doesn't have a built-in function or command for this. Another possible route is the Splunk Machine Learning Toolkit. Even though your problem is perhaps not machine learning, the mathematics are similar enough.
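To illustrate the "Splunk is not bad at math" point: R² = 1 - SS_res/SS_tot can be computed in plain SPL. A sketch, assuming each event carries an actual and a predicted field:

| eventstats avg(actual) AS mean_actual
| eval sq_err=pow(actual-predicted,2), sq_tot=pow(actual-mean_actual,2)
| stats sum(sq_err) AS ss_res sum(sq_tot) AS ss_tot
| eval r_squared=1-(ss_res/ss_tot)

eventstats computes the mean of the actuals across all events first, so the total sum of squares is measured against it.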
I obtained an AppDynamics account to install the on-premises AppDynamics platform on a trial basis. However, when I search for Downloads in “AppDynamics and Observability Platform”, the Platform is not displayed. I forget what steps I took to register for the account, but maybe it's because I created it using a SaaS trial license. Is it possible to install the on-premises AppDynamics platform from this state? Is there no other way but to recreate the account?
Hi @matheusvortex, you could write the results of the two searches to one summary index (called e.g. notables), adding in each alert all the fields you need, and then run the third alert on the summary index, displaying the fields you need. This is the approach Enterprise Security takes. Ciao. Giuseppe
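A rough sketch of the pattern, with the index name and field list as placeholders: each of the first two alerts would end with something like

... | eval alert_name="alert_A" | collect index=notables

and the third alert then searches the summary index:

index=notables (alert_name="alert_A" OR alert_name="alert_B")
| stats values(*) AS * BY some_common_key

The notables index has to be created beforehand, and collect requires permission to write to it.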
Hi @vineela, do you always have the backslashes in your logs? If yes, you should consider them in the regex. In regex101.com (https://regex101.com/r/7Fq96D/1):

errorCode\s*\=\s*\\\"(?<errorCode>[^\\]+)

but in Splunk you must try:

| rex "errorCode\s*\=\s*\\\\\"(?<errorCode>[^\\]+)"

Ciao. Giuseppe
Because something is wrong. That's the short and useless answer to a badly asked question. For something more constructive: click on that red exclamation mark and see which checks are failing.