All Posts


It's not Splunk Cloud, but Splunk Enterprise 9.3.
Hello all, I'm implementing some routing at the moment in order to forward a subset of data to a third-party syslog system. However, I'm running into issues with the Windows logs. They look like this at syslog-ng:

Dec 29 07:47:18 12/29/2014 02:47:17 AM
Dec 29 07:47:18 LogName=Security
Dec 29 07:47:18 SourceName=Microsoft Windows security auditing.
Dec 29 07:47:18 EventCode=4689
Dec 29 07:47:18 EventType=0

I believe this is because of the \r\n line endings in the non-XML Windows events. How can I get the Splunk heavy forwarder to treat each Windows event as a single line and then send it through?

Architecture: UF - HF - third-party system/Splunk Cloud

Thanks
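One commonly suggested approach is to flatten the CR/LF line endings at parse time on the heavy forwarder with a SEDCMD, so each event leaves as a single syslog line. This is only a sketch under assumptions: it assumes the HF is the first parsing tier for this data and that the sourcetype is `WinEventLog:Security`; adjust the stanza name to whatever your inputs actually use.

```
# props.conf on the heavy forwarder -- sketch, sourcetype name is an assumption.
# Replaces runs of CR/LF inside each Windows event with a single space,
# so the syslog output writes one line per event.
[WinEventLog:Security]
SEDCMD-flatten_crlf = s/[\r\n]+/ /g
```

Note that SEDCMD rewrites _raw for everything downstream (indexing included), not just the syslog copy, so test this on a non-production sourcetype first.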
I am currently working on creating an alert for a possible MFA fatigue attack from our Entra ID sign-in logs. The logic would be to find sign-in events where a user received x number of MFA requests within a given timeframe and denied them all, but then approved one (the 5th, for example), for our SOC to investigate. I have some of the logic written out below, but I am struggling to figure out how to add the last piece: an approved MFA request after the x number of denied MFA attempts by the same user. Has anyone had any luck creating this, and if so, how did you go about it? Any help is greatly appreciated. Thank you!

index=cloud_entraid category=SignInLogs operationName="Sign-in activity" properties.status.errorCode=500121 properties.status.additionalDetails="MFA denied; user declined the authentication"
| rename properties.* as *
| bucket span=10m _time
| stats count min(_time) as firstTime max(_time) as lastTime by user, status.additionalDetails, appDisplayName, user_agent
| where count > 4
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
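One way to sketch the "approval after a run of denials" piece is to bring both outcomes into the search and use streamstats with a sliding window per user. This is untested against your data: the success errorCode of 0 and the exact field names are assumptions; adjust for your environment.

```
index=cloud_entraid category=SignInLogs operationName="Sign-in activity"
    (properties.status.errorCode=500121 OR properties.status.errorCode=0)
| rename properties.* as *
| eval outcome=if('status.errorCode'=500121, "denied", "approved")
| sort 0 user _time
| streamstats window=5 count(eval(outcome="denied")) as recent_denied by user
| where outcome="approved" AND recent_denied >= 4
| table _time user appDisplayName user_agent recent_denied
```

The window of 5 covers the current event plus the four before it, so an approval preceded by at least four denials from the same user survives the `where`; you may also want to bound the window in time rather than event count.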
Similar issue. There are no error logs per se. The search log shows that the output appears to be happening on the remote SH:

Results written to file '/opt/splunk/etc/apps/search/lookups/mylookup.csv' on serverName=<<remoteServerName>>

In other words, if I log in to my local search head, run this, and get an output of 100 entries:

| federated from:my report | outputlookup mylookup.csv

then when I run this (again on the local search head), it is empty:

| inputlookup mylookup.csv
Hello adrifesa95. Are you using the Splunk Add-on for Check Point Log Exporter, or the older Splunk Add-on for Check Point OPSEC LEA? If the newer one, there is a section in the docs on troubleshooting when it's not parsing due to the depth limit, and how to increase it: https://docs.splunk.com/Documentation/AddOns/released/CheckPointLogExporter/Troubleshoot
Thanks so much for your attention. Your feedback really means a lot to me. I totally agree that there are different ways to reach the same goal. I'll definitely try to use your suggestions, but honestly, if I were to implement everything you mentioned, it would pretty much turn into a whole new project with a different approach. Using Python was a great idea, but for some reason, I just didn't end up using it!

Let me explain a bit about some of the points you brought up. The main thing that made the code a bit complicated is all the logging that's happening. I needed to log every single event in the project, and the reason I used process IDs was to track everything from start to finish. Since the code is open source, anyone can tweak it to fit their needs. The task might seem simple (deleting frozen buckets based on a limit), but as you know, once you start working on a project, you run into all sorts of issues. Writing this took me a few weeks, and without ChatGPT, it would've taken even longer; I've mentioned in the README that I got some help from ChatGPT.

As for the hardcoded paths, your idea is a good one, and I'm hoping someone will contribute that to the project. Lastly, I tested this script on 40 TB of frozen data with a daily log volume of 5 TB, and at least for me, there weren't any performance issues: deleting directly from the shell was just as fast as using the script. I hope you get a chance to test it out and let me know how it goes. I'd be really happy to use your feedback to improve the project even more.
| fieldsummary | search values=*\"value\":\"<the value you want to check>\"* | table field
You can infer from the search itself which fields you need present. You need "dest" and "country" fields in the sse_host_to_country lookup and "user" and "countries" fields in the gdpr_user_category lookup (and the "countries" field can contain multiple values separated with the pipe character).
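For illustration only (the hostnames, usernames, and countries below are invented), the two lookup files could look like this:

```
sse_host_to_country.csv
dest,country
web01.example.com,Germany
db02.example.com,United States

gdpr_user_category.csv
user,countries
alice,Germany
bob,Germany|United States
```

The pipe in bob's row is the multi-value separator mentioned above; the actual values have to come from your own user/asset data.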
Could you please give me the format of that lookup?
Hello FelixL, I have the same problem as you. Did you find out why it happened and how to fix it? If you change the locale in the URL, it will sometimes start working, for example en-US to en-GB.
My colleagues and I have not been able to access the Splunk Support Portal for days; we receive a 404 error. We have tried different links:

https://splunk.my.site.com/customer/s/
https://splunk.my.site.com/partner/s/

But none of them are working. This means we cannot access Entitlements or open and manage Cases. Is anyone having the same problem?
Have you been able to access it? We are still having this problem and cannot access entitlements or open cases. Splunk seems not to be aware of this problem at all, even after we contacted them.
Hi Team,

We are trying to install the Auto Update MaxMind Database app (https://splunkbase.splunk.com/app/5482) into our Splunk deployment. We have the account ID and the license key. While testing it by running the command

| maxminddbupdate

we got the error below:

HTTPSConnectionPool(host='download.maxmind.com', port=443): Max retries exceeded with url: /geoip/databases/GeoLite2-City/download?suffix=tar.gz (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))
These are lookups you should have defined based on your own environment (probably populated by user/asset management). The idea here is that you want to find whether someone from, for example, the US branch of your company is logging in to Germany-based servers. And how should anyone except you know which hosts are in Germany and which users work in the US?
The easiest way to tackle this would be to remove the "renamed" app and use the one from Splunkbase. You can also remove the one from Splunkbase and change the app ID in your renamed app so that it does not get updated (but then you're stuck with the one you have). Why would you want to rename the app in the first place? If you want to override in-app settings, you have the local directory.
Move the filldown to before the calculations. (Splunk is not Excel or another spreadsheet application; the calculations are not dynamic formulae held in cells!)
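To illustrate the ordering with a self-contained run (field names are borrowed from the question; the numbers are made up):

```
| makeresults count=3
| streamstats count
| eval CIMsendingTime_unix=if(count=1, 1700000000, null())
| filldown CIMsendingTime_unix
| eval catchup_unix_time = 1700000000 + count
| eval distributor_to_abc_latency = catchup_unix_time - CIMsendingTime_unix
```

Because the filldown runs before the eval, CIMsendingTime_unix is populated on every row, so distributor_to_abc_latency is computed on every row; with the filldown placed after the eval, rows 2 and 3 would stay empty.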
hi @ITWhisperer, it works, but my other columns, which are a calculation of that column, don't get populated:

| eval distributor_to_abc_latency = catchup_unix_time - CIMsendingTime_unix

Since the column was empty and was filled using filldown, the other columns don't get filled.
Hi everyone, good afternoon.

We recently renamed the add-on. After renaming, we are facing the issues below:

* After upgrading, we see two add-ons, one with the old name and one with the new name, but ideally only the latest add-on should remain after an upgrade.
* Inputs of the old add-on are not migrating to the new add-on. We replicated the app ID of the old add-on in the new add-on, but it did not work.

If anyone has faced this issue, please suggest how to resolve the problem.

Thanks,
@vasudevahebri, I would advise you to check your client secrets and make sure they are valid and not expired.