All Posts

Hi @toporagno , as @richgalloway said, there are many samples of a brute force attack followed by a successful login in the Security Essentials App and in the ES Content Updates App. Anyway, you could try something like this:

| tstats summariesonly=true
    count(eval(Authentication.action="success")) AS success_count
    count(eval(Authentication.action="failure")) AS failure_count
    FROM datamodel=Authentication
    WHERE Authentication.action IN (success, failure)
    BY Authentication.user
| rename Authentication.user AS user
| where failure_count>=6 AND success_count>=6

You can adapt this to your data. Ciao. Giuseppe
Hi @toporagno , let me understand: you have an eventtype like "index=my_index src=10.0.0.1" that tags these events with a tag like "MY_TAG"; you have a search that uses the above src and it runs; you want to replace the condition "src=10.0.0.1" with the condition tag=MY_TAG, and then it doesn't run. Is that correct? If this is your use case, as @ITWhisperer also asked, the first thing is sharing the search and the eventtype associated with the tag. Then, tag is the only case-sensitive field: are you sure that the tag value is correct? Second check: did you try replacing the condition "src=10.0.0.1" with the eventtype associated with the tag? Maybe the two conditions aren't compatible. Ciao. Giuseppe
Hi @vishwa, You can use the regex below: ([A-Z]+)\:\s+(.+?)\s+
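If it helps to sanity-check the pattern outside Splunk, here is a quick Python sketch (the sample line is made up, substitute one of your real events):

```python
import re

# Hypothetical sample event line; replace with your real data.
sample = "WARN: connection timed out after 30s"

# Same pattern as above: uppercase word, colon, whitespace, then a lazy
# capture that stops at the next run of whitespace.
pattern = r"([A-Z]+)\:\s+(.+?)\s+"

m = re.search(pattern, sample)
if m:
    print(m.group(1))  # WARN
    print(m.group(2))  # connection
```

Note that the lazy (.+?) followed by \s+ captures only up to the first whitespace, so the second group here is "connection", not the whole message; if you want everything after the colon, something like (.+)$ may be closer to what you need.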
If you want to run SplunkForwarder with a virtual account (which is recommended if you want to follow the principle of least privilege), there is also a way to enable reading of Sysmon logs: NT SERVICE\SplunkForwarder needs to be added to the Event Log Readers group. One of the ways is to add it to Group Policy and deploy it across your environment wherever your forwarders are installed.
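For a single machine (for example, to test before rolling out the Group Policy change), the same group membership can be added locally. This assumes the service name is SplunkForwarder; adjust if yours differs:

```shell
:: Run in an elevated prompt on the forwarder host.
:: Adds the forwarder's virtual account to the Event Log Readers group,
:: then restarts the service so the new group membership takes effect.
net localgroup "Event Log Readers" "NT SERVICE\SplunkForwarder" /add
net stop SplunkForwarder && net start SplunkForwarder
```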
It's working.  I just added my second forwarder.  Thanks again!
The search did not work. It looks like it may be caused by the data index. I checked the Monitoring Console and ran a health check; it is showing license warnings and violations, with the category showing data indexing.
I used the Splunk Add-on for AWS to send log files stored in S3 to SQS using S3 event notifications, and configured Splunk to read the log files from SQS. However, I got an error saying that the S3 test message that is always sent first by S3 event notifications could not be parsed. Splunk on EC2 is given KMS decryption privileges as shown below.

{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": [
        "sqs:*",
        "s3:*",
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:sqs:ap-northeast-1:*************:poc-splunk-vpcflowlog*",
        "arn:aws:s3:::poc-splunk-vpcflowlog",
        "arn:aws:s3:::poc-splunk-vpcflowlog/*"
    ]
}

What could be the cause?
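One thing worth checking: when event notifications are first attached to a bucket, S3 drops a one-off test message on the queue whose body is plain JSON containing "Event": "s3:TestEvent" and no Records array, so anything that expects real notifications will fail to parse it. A minimal sketch of how to recognize that message (so you can delete it from the queue and let the add-on proceed):

```python
import json

def is_s3_test_event(body: str) -> bool:
    """Return True if an SQS message body is the one-off s3:TestEvent
    that S3 sends when event notifications are first configured."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    return payload.get("Event") == "s3:TestEvent"
```

If this is the cause, deleting that single message (or purging the queue once) usually clears the error; subsequent real notifications contain a Records array instead.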
This will show you your ingest usage by sourcetype:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log type=Usage
| timechart limit=40 sum(b) as data by st

Also look at the Monitoring Console; that will give you information on your sourcetype/index ingestion as well.
That looks like it is more than 128 characters into the event (the default lookahead), so you should set MAX_TIMESTAMP_LOOKAHEAD and optionally TIME_PREFIX for that data's sourcetype.
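For example, in props.conf on the parsing tier (the sourcetype name and prefix here are placeholders; adjust them to where your timestamp actually sits in the event):

```
[your_sourcetype]
# Regex matching the text immediately before the timestamp
TIME_PREFIX = ^.*?timestamp=
# How many characters past TIME_PREFIX Splunk may scan for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 30
```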
My code was an example using your data. You are using that fixed set of strings in your code; you should run the rex against your raw data, not the fixed msgs field. Remove the eval msgs=... and the mvexpand, as that was just example code. Your rex statement should use either _raw or, if you have those messages extracted to a separate field, that field.
Hello! We keep going over our license usage and can't seem to find what is causing it; we've gone over 3 times now. Any suggestion on how to find what is causing this, please?
Hi @bowesmana, As you suggested we tried the query below, but I am getting the same values for each msgs string. Can you please let me know if my query is correct?

index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| eval msgs=split("Initial message received with below details,Letter published correctley to ATM subject,Letter published correctley to DMM subject,Letter rejected due to: DOUBLE_KEY,Letter rejected due to: UNVALID_LOG,Letter rejected due to: UNVALID_DATA_APP",",")
| mvexpand msgs
| rex field=msgs "(Initial message |Letter published correctley to |Letter rejected due to: )(?<reason>.*)"
| chart count over RampdataSet by reason
| addtotals

OUTPUT:

Rails below details ATM subject DMM subject DOUBLE_KEY UNVALID_LOG UNVALID_DATA_APP Total
WAC 0 0 0 0 0 0 0
WAX 15 15 15 15 15 15 90
WAM 20 20 20 20 20 20 120
STC 12 12 12 12 12 12 72
STX 30 30 30 30 30 30 180
OTP 10 10 10 10 10 10 60
TTC 5 5 5 5 5 5 30
TAN 7 7 7 7 7 7 42
TXN 10 10 10 10 10 10 60
WOU 12 12 12 12 12 12 72
We are updating the docs to reflect layering of multiple http stanzas with different queueSize values. Eventually all tokens share one input queue, httpInputQ. Once all tokens are read in memory, the first token (sorted in ascending order) wins and creates the final httpInputQ. The other queueSize values are no-ops since the queue is already created. The above is also applicable to multiple splunktcpin or tcpin ports having different queueSize values but sharing the splunktcp queue or tcpin queue.
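To illustrate with a hypothetical inputs.conf (token names and sizes are made up):

```
[http://tokenA]
queueSize = 1MB

[http://tokenB]
queueSize = 10MB
```

Here the stanza that sorts first (tokenA) creates httpInputQ at 1MB; tokenB's 10MB setting is a no-op because both tokens share the queue that already exists.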
@marnall is correct. I don't believe that Dashboard Studio supports checkbox, but aside from look and feel, multiselect provides all the functionality that checkbox does. You can also ask in the dedicated Dashboards & Visualizations forum.
Have you compared emulation with real data?  Also, really get rid of that table command which can be in the way. (You can add some formatting after you verify that outputs are satisfactory.)  Is there some real data that you can share? (Anonymize as needed but take care to preserve precise structure.)  Using emulation, the output is not zero.  Clearly, actual data is different from what you posted above. Run this:   | makeresults | eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", \"DATA\": \"[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}" | spath | eval _time = strptime(date, "%d/%m/%Y %H:%M:%S,%f") ``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ``` | rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g" | rename DATA AS _raw | kv |search ACTION= start OR ACTION=done NOT SERVICE="null" |eval split=SERVICE.":".ACTION |timechart span=1d count by split |eval _time=strftime(_time, "%d/%m/%Y") | table _time *START *DONE   Do you get the same results as I did in the previous comment? (I do not encourage use of screenshot to show search or results, but I had already shared them in text previously. So, here you go for a screenshot.)
Hello, good day team! How are you? I downloaded and installed this app but I can't find the "TA genesys cloud". Where can I download it? Does the TA live in another repository? Please, could you help me get this TA? If the TA isn't currently on Splunkbase, could you send it to me via email, please? Regards in advance! Carlos Martínez. carloshugo.martinez@edenred.com Edenred.
What happens if you use the v2 jobs endpoint? (the non-v2 one is deprecated, as per https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch) Instead of: url = "https://abc.splunkcloud.com:8089/servicesAB/-/xyz/search/jobs" Try: url = "https://abc.splunkcloud.com:8089/servicesAB/-/xyz/search/v2/jobs"  
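A quick way to double-check that only the path changes (keeping the anonymized "servicesAB"/"xyz" segments from your post; substitute your real namespace and app):

```python
def v2_jobs_url(base: str, ns: str = "servicesAB", app: str = "xyz") -> str:
    # Builds the v2 search jobs endpoint. "servicesAB" and "xyz" are the
    # placeholders from the question above, not real Splunk path segments.
    return f"{base}/{ns}/-/{app}/search/v2/jobs"

url = v2_jobs_url("https://abc.splunkcloud.com:8089")
# url is "https://abc.splunkcloud.com:8089/servicesAB/-/xyz/search/v2/jobs"
```

Everything else about the request (auth header, POST body with the search string) stays the same as with the deprecated endpoint.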
From searching the Postman public collections, there seems to be a Splunk Postman collection, built using the public tutorialdata.zip file. https://www.postman.com/njrusmc/workspace/public-collections/collection/14123647-1408e0a3-c0bb-4e08-83b1-8f83fdc8f1c0?tab=overview
Unless I am mistaken, the multiselect input should provide the checkbox functionality you desire. Does multiselect work for your use case?
These lookups get their information from the configured asset lookups within Enterprise Security, as you linked; they're populated automatically. Do your asset lookups have CIDR information included in them? If the string lookup is populating, then you have some kind of assets configured. If you have a dev environment experiencing this problem, you might try enabling the demo_asset_lookup on the Asset and Identity Management page to see if it populates the CIDR one automatically; it has CIDR networks properly built into it. The best official documentation I came across in my search was https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Howassetandidentitydataprocessed