All Posts


@PrewinThomas, I removed the transaction command. Let's keep it simple. I need a table that plots the blocked counts for email, firewall, DLP, EDR, web proxy, and WAF over the last 6 months, showing the count for each month and its total. What I did was modify each query to return the last 6 months of data for each source and then simply append the results into one table, which is not good practice. Hence I am here asking the experts for help. Here are the individual queries:

Email (from the pps datamodel, counting inbound spam/discarded/rejected emails):
| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time

DLP:
index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count(action) as Blocked by _time

Web Proxy:
index=zscaler* action=blocked sourcetype="zscalernss-web" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count as Blocked by _time

EDR:
index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* earliest=-6mon@mon latest=now | bin _time span=1mon | search action=blocked NOT action=allowed | stats dc(event.DetectId) as Blocked by _time

WAF (Web is an accelerated datamodel in my environment; `security_content_summariesonly` expands to summariesonly=false allow_old_summaries=true fillnull_value=null):
| tstats `security_content_summariesonly` count as Blocked from datamodel=Web where sourcetype IN ("alertlogic:waf","aemcdn","aws:*","azure:firewall:*") AND Web.action="block" earliest=-6mon@mon latest=now by _time

Lastly, Firewall:
| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time
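Since each source lives in a different index or datamodel, some form of append is hard to avoid; the main cleanup is to tag each result set with a Source field and pivot once at the end. A minimal sketch of that pattern, shown with just two of the sources (assuming the individual per-source searches above are already correct for your environment):

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| append
    [ search index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now
      | bin _time span=1mon
      | stats count(action) as Blocked by _time
      | eval Source="DLP" ]
| eval Month=strftime(_time, "%Y-%m")
| chart sum(Blocked) as Blocked over Source by Month
| addtotals fieldname=Total

Using a sortable %Y-%m month key keeps the columns in chronological order without hard-coding month names, and addtotals adds the per-source total column.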
Hi @dmcnulty

The captain is refusing the sync request because the member doesn't have a valid baseline, and the subsequent resync attempt failed because a required snapshot file is missing or inaccessible.

The recommended action is to perform a destructive configuration resync on the affected member (SH3). This forces the member to discard its current replicated configuration and pull a fresh copy from the captain. Run the following command on the affected search head member (SH3):

splunk resync shcluster-replicated-config --answer-yes

This command will discard the contents of $SPLUNK_HOME/etc/shcluster/apps and $SPLUNK_HOME/etc/shcluster/local on SH3 and attempt to fetch a complete, fresh copy from the captain. Ensure the captain (SH2) is healthy and has sufficient disk space and resources before running this command.

If the destructive resync fails with the same or a similar error about a missing snapshot file, it might indicate a more severe issue with the captain's snapshot or the member's ability to process the bundle. In that case, check the captain's splunkd.log for specific errors around replication bundles. If the issue persists, removing the member from the cluster and re-adding it is the standard, albeit more disruptive, next step.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
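As a rough outline, the recovery sequence on the affected member might look like the following (paths are illustrative and assume a default /opt/splunk install; verify captain health and disk space first):

# On SH3, the affected member:
# 1. Confirm cluster/captain status before making changes
/opt/splunk/bin/splunk show shcluster-status

# 2. Destructive resync: discards local replicated config and pulls fresh from the captain
/opt/splunk/bin/splunk resync shcluster-replicated-config --answer-yes

# 3. If it still fails, inspect recent replication errors on the captain (SH2)
grep -i "ConfReplication" /opt/splunk/var/log/splunk/splunkd.log | tail -50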
After running out of disk space on a search head (part of a cluster), now fixed and all SHs rebooted, I get this error:

ConfReplicationException: Error pulling configurations from the search head cluster captain (SH2:8089); Error in fetchFrom, at=: Non-200 status_code=500: refuse request without valid baseline; snapshot exists at op_id=xxxx6e8e for repo=SH2:8089". Search head cluster member (SH3:8089) is having trouble pulling configs from the captain (SH2:8089). xxxxx Consider performing a destructive configuration resync on this search head cluster member.

I ran "splunk resync shcluster-replicated-config" and get this:

ConfReplicationException: Error downloading snapshot: Non-200 status_code=400: Error opening snapshot_file '/opt/splunk/var/run/snapshot/174xxxxxxxx82aca.bundle: No such file or directory.

The snapshot folder is empty (sometimes it has a few files, but they don't match the other search heads). 'splunk show bundle-replication-status' is all green and the same as the other 2 SHs.

Is there a force resync switch? I really can't remove this SH and run 'clean all'. Thank you!
@PickleRick, `security_content_summariesonly` expands to summariesonly=false allow_old_summaries=true fillnull_value=null. I re-arranged the parameters a bit and it now seems to load in around 6 minutes; I still need to optimize it. Here is the expanded view of the query:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| append [ search index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count(action) as Blocked by _time | eval Source="DLP"]
| append [ search index=zscaler* action=blocked sourcetype="zscalernss-web" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count as Blocked by _time | eval Source="Web Proxy"]
| append [ search index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* earliest=-6mon@mon latest=now | bin _time span=1mon | transaction "event.DetectId" | search action=blocked NOT action=allowed | stats dc(event.DetectId) as Blocked by _time | eval Source="EDR"]
| append [| tstats summariesonly=false allow_old_summaries=true fillnull_value=null count as Blocked from datamodel=Web where sourcetype IN ("alertlogic:waf","aemcdn","aws:*","azure:firewall:*") AND Web.action="block" earliest=-6mon latest=now by _time | eval Source="WAF"]
| append [| tstats summariesonly=false allow_old_summaries=true fillnull_value=null count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time | eval Source="Firewall"]
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source Dec Jan Feb Mar Apr May Jun

The goal is to get a table like this -
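One fragile spot in the query above is the hard-coded "| table Source Dec Jan Feb Mar Apr May Jun" column list, which breaks as soon as the 6-month window shifts. A hedged alternative (illustrative only, reusing the fields already computed above) is to pivot on the sortable MonthNum key instead of the month name, so xyseries emits the columns in chronological order with no hard-coding:

| eval Month=strftime(_time, "%Y-%m")
| stats sum(Blocked) as Blocked by Source Month
| xyseries Source Month Blocked
| addtotals fieldname=Total

addtotals then provides the per-source Total column without a separate calculation.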
Good to know, thank you! I'll start working on this and we'll see how it goes.
Forget you ever heard about Search Filters.  They usually cause more problems than they solve. TRANSFORMS are index-time operations so they will mask data for everyone. What you want is Field Filters.  They automatically mask fields in search results based on user roles.  See https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/searchfieldfilters for more information.
No license is needed for a standalone server that only searches thawed data since there is no ingest.
Hello folks, we use Splunk Cloud Platform (managed by Splunk) for our logging system. We want to implement role-based search filtering to mask JWT tokens and emails in the logs for certain users.

Example roles: User, RestrictedUser. Both roles have access to the same index: main. Users can query as normal, but if a RestrictedUser searches the logs they should get the logs with the token and email data masked.

Documentation/community posts/Gemini recommended adding regexes for filtering in transforms.conf and updating some other conf files like so:

# transforms.conf
[redact_jwt_searchtime]
REGEX = (token=([A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9-_]+))
FORMAT = token=xxx.xxx.xxx
SOURCE_KEY = _raw

[redact_email_searchtime]
REGEX = ([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,})
FORMAT = xxx@xxx.xxx
SOURCE_KEY = _raw

# props.conf
[*]
TRANSFORMS-redact_for_search = redact_jwt_searchtime, redact_email_searchtime

# authorize.conf
[test_masked_data]
srchFilter = search_filters = redact_for_search

and then creating an app and uploading it to the cloud platform. Since the platform is managed by Splunk, I'm not sure that would be sufficient or even work.

Does anyone have suggestions on the best way to apply role-based search filters on Splunk Cloud rather than on-premises?
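For what it's worth, the same masking regexes can be exercised at search time with eval/replace, which is useful for validating the patterns before committing to a role-based mechanism. This is only an illustrative stopgap: it masks for whoever runs the search and enforces nothing per role:

index=main
| eval _raw=replace(_raw, "token=[A-Za-z0-9\-]+\.[A-Za-z0-9\-]+\.[A-Za-z0-9_\-]+", "token=xxx.xxx.xxx")
| eval _raw=replace(_raw, "[A-Za-z0-9._%+\-]+@[A-Za-z0-9.\-]+\.[A-Za-z]{2,}", "xxx@xxx.xxx")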
Hello all, is the Nutanix TA (version 2.5.0) compatible with Splunk 9.3.4+? It is listed as such on Splunkbase (https://splunkbase.splunk.com/app/3103), but when I attempted to upgrade I got:

Unable to initialize modular input "abc" defined in the app "TA-nutanix": Introspecting scheme=nutanix_health: script running failed (PID 2607550 exited with code 1).
I don't see a Splunkbase add-on for Airtable. Is this a private app for Splunk Enterprise? It would be nice to have something that works for Splunk Cloud.
Thanks for the replies, I will clarify. Management wants me to test thawing old data so it is searchable (near term) or can possibly be moved to the cloud later this year. DDSS and DDAA will be part of the discussion a bit down the road, but for now I need to test/verify thawing from frozen. We are going to retire our on-prem infrastructure at some point. The thawed data does not have to go to our production cluster, so a standalone single-instance Splunk server would work. If I stand up a new single-instance server, is there any licensing I need to worry about if I'm just using it to thaw frozen data?
It's not clear how this relates to cloud migrations.

If you sign up for Splunk Cloud's Dynamic Data Self Storage (DDSS) service, then data archived in the cloud is the same as data archived on-prem. You must thaw the data and then stand up indexers to process it.

If you sign up for Splunk Cloud's Dynamic Data Active Archive (DDAA) service, then you use the GUI to tell Splunk what data to restore for you and it becomes searchable for a limited time (30 days, IIRC). External data cannot be added to DDAA.

Either way, there's no need to migrate currently-frozen data to the cloud.
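For reference, thawing on a standalone indexer usually amounts to copying the frozen bucket into the target index's thaweddb directory and rebuilding it. A sketch, with the archive path, index name, and bucket directory name all illustrative:

# Copy a frozen bucket into the target index's thaweddb directory
cp -r /archive/frozen/db_1609459200_1606867200_42 \
      $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/

# Rebuild the bucket's index metadata so it becomes searchable again
$SPLUNK_HOME/bin/splunk rebuild \
      $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1609459200_1606867200_42

# Restart so the thawed bucket is picked up
$SPLUNK_HOME/bin/splunk restart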
The HOST field worked! Thanks @new_splunker. I'm trying to ingest a webhook into the Splunk Cloud trial instance and I'm getting an SSL certification error: "failed to verify the legitimacy of the server and therefore could not establish a secure connection to it". Were there any other settings you changed to establish the webhook connection?
Ah okay, I'm sorry, I'm not too familiar with the app, but hopefully someone else on here might have experience with it. Have you seen the "Details" tab on https://splunkbase.splunk.com/app/5365, which has some setup instructions?
I have installed this one but I've not been able to get it working. I'm using the same proxy as with the Splunk Add-on for Microsoft Office 365. I've put in an incorrect secret key, but I don't get any kind of error like I do with the Splunk Add-on for Microsoft Office 365.
Hi @mikefg

I take it you just need to thaw the data so it can be copied to your Splunk Cloud instance? Is PS doing this work? If so, they might have a preference as to where this data is or how it's accessed as part of the wider migration piece (there may be other bits of info I'm unaware of, e.g. is this an online SmartStore migration or a data copy?).

However, personally (and without knowing what I don't know!) I would go with creating an instance connected to your old storage array. You actually only need a standalone Splunk instance to thaw out data, and if you don't need to search it before it moves to Splunk Cloud you shouldn't need to scale it out much, unless you really have a lot to thaw. Once it is thawed it will be in a format which can be used with existing processes for migrating to Splunk Cloud.
Hi, I have created a playbook and am trying to run it from an event, but the playbook does not populate when I click on Run Playbook. What is it that I am doing wrong?
Hi @anlePRH

Are you already producing the table you shared in your original post, or is that what you are wanting to get to? You should be able to use the following after your rex:

| stats list(SourceIP) as IPs, count as Count by Subnet
Hi @vishalduttauk

Have you seen the Microsoft O365 Email Add-on for Splunk? Its description includes "The Microsoft® O365® Email Add-on for Splunk® ingests O365 emails via Microsoft's Graph API", so I think this might give you the email content that you need! Check it out and let me know if you need any further help!