Hi everyone! My goal is to create an alert that monitors ALL saved searches for any email recipient that no longer exists (mainly, colleagues that left the company or similar). My idea was to search for the same Mail Delivery Subsystem pattern that appears when sending an email from Gmail (or any other provider) to a non-existing address. But I didn't find anything in the _internal index, nor with a rest call to saved searches, and index=mail is empty. Any idea?
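One angle that may help while investigating: the saved/searches REST endpoint exposes each search's configured recipients, so you can at least inventory every address Splunk will try to mail and check those against your directory yourself. A minimal sketch (the cross-check against HR/AD data is left as an assumption):

  | rest /servicesNS/-/-/saved/searches splunk_server=local
  | search action.email=1
  | table title, eai:acl.app, action.email.to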
It is Splunk Enterprise, not Cloud. "Which will allow kv_mode=json to do its thing. You never should go straight to indexed_extractions=json." Where do I need to set kv_mode? We have syslog servers where the UF is installed, and we have a DS which pushes apps to the deployer and the manager. From there they are pushed to the peer nodes and search heads. Where exactly can I set this attribute (kv_mode=json)? We have props and transforms in the manager apps, which are pushed to all peers. I don't see any props on the search heads.
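For what it's worth, KV_MODE is a search-time setting, so it belongs in props.conf in an app on the search heads (pushed via the deployer in a search head cluster), not in the manager's peer apps. A minimal sketch, with a hypothetical sourcetype name:

  # props.conf, in an app deployed to the search heads
  [my_syslog_json]
  KV_MODE = json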
Thank you. I think you just confirmed I would need to write code. It is strange that AppD does have this information (it can split on ThreadId/ThreadName), but it won't let you see this information within the business transaction screens.
Hello Hello! I'm trying to match the values from a lookup file, in this case Amazon CIDR values, against IP addresses that are dynamically retrieved from events, but I can't get it to work. The following is a snippet of what I have:

  | append [| inputlookup cidr_aws.csv ]
  | foreach CIDR [ eval matched_ip = if(cidrmatch(<<FIELD>>, ip_address), ip_address, null()) ]
  | search matched_ip!=null
  | table matched_ip, CIDR

There is nothing outputted from this, and if I remove the "| search matched_ip!=null" then I can see that the IP appears, which means it failed the "cidrmatch" comparison. After some experimenting I figured out that the entire thing works if I hardcode either the "<<FIELD>>" value or "ip_address", like the following two examples:

  | append [| inputlookup cidr_aws.csv ]
  | foreach CIDR [ eval matched_ip = if(cidrmatch("3.248.0.0/13", ip_address), ip_address, null()) ]
  | search matched_ip!=null
  | table matched_ip, CIDR, Country

or

  | append [| inputlookup cidr_aws.csv ]
  | foreach CIDR [ eval matched_ip = if(cidrmatch(<<FIELD>>, "3.248.163.69"), ip_address, null()) ]
  | search matched_ip!=null
  | table matched_ip, CIDR, Country

but this is not optimal since it's supposed to be dynamic. Does anybody know how to solve this?
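A likely cause, for what it's worth: append adds the lookup rows as separate results, so on the event rows CIDR is null and on the appended lookup rows ip_address is null; cidrmatch never sees both values on the same row. One common alternative is to declare the lookup CIDR-aware in transforms.conf and let the lookup command do the subnet matching. A sketch, assuming the file and field names from the post:

  # transforms.conf on the search head
  [cidr_aws]
  filename = cidr_aws.csv
  match_type = CIDR(CIDR)

Then in the search:

  ... | lookup cidr_aws CIDR AS ip_address OUTPUT CIDR AS matched_cidr, Country
  | where isnotnull(matched_cidr)
  | table ip_address, matched_cidr, Country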
If they want to parse JSON automatically, the sender agent/mechanism must send fully formed JSON events. Review the event with them... it's not JSON, it's JSON in an unstructured log line. In fact, this looks like some JSON-through-syslog adventure. Yum!

  <12>Nov 12 20:15:12 localhost whatever: data={"a":"b","c":"d"}

The easiest way in syslog is to send kv pairs in the log events instead of JSON, like foo=bar bar=baz:

  <12>Nov 12 20:15:12 localhost whatever: a=b c=d

Then Splunk can just pick out all the kv pairs automagically, instead of having to parse JSON to do the same thing. Many apps have this option in their logger. You might get lucky. JSON provides no value here if we have to live with whatever pipeline is sending this syslog filled with JSON stuff.

If the app can't change its format, or the ingestion path can't be reviewed, then the next option is surgery on the inbound event, where Splunk config is used to parse the syslog facility, the timestamp (which doesn't even have the year or subsecond precision) and the host into indexed fields, then remove this part of the event:

  <12>Nov 12 20:15:12 localhost whatever: data=

so all that's left when Splunk indexes the _raw event is:

  {"a":"b","c":"d"}

which will allow kv_mode=json to do its thing. You never should go straight to indexed_extractions=json. See this awesome conf talk on the power of Splunk ingest_eval: https://conf.splunk.com/files/2020/slides/PLA1154C.pdf Then these examples on GitHub from the conf talk: https://github.com/silkyrich/ingest_eval_examples/blob/master/default/props.conf https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf Or look into Splunk Edge Processor or Ingest Processor if you are a cloud customer.

Options after that are reviewing the ingestion process and moving away from syslog to more modern collection, to get better data like ISO timestamps with timezone, etc. But whatever you use still needs to be able to format the event properly if you want the benefit of a structured data format. I strongly suggest you consult with the Splunk Sales Engineer on the customer's account so that an expert or partner can help them achieve this and you can learn by working with them. Is this an on-prem Enterprise user, or a Cloud user?
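A minimal sketch of that surgery using SEDCMD, a simpler cousin of the ingest_eval approach in the linked slides. The sourcetype name is hypothetical, and the header regex assumes the prefix always looks like the sample above:

  # props.conf on the indexers (or heavy forwarder)
  [my_syslog_json]
  SEDCMD-strip_header = s/^<\d+>\w{3} +\d+ \d{2}:\d{2}:\d{2} \S+ \S+ data=//

  # props.conf on the search heads, same sourcetype
  [my_syslog_json]
  KV_MODE = json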
Usually there are events every 30s or something like that. Perhaps your environment has some hiccups or it's stalled for some reason? It's hard to say more without more information about your setup, OS and hardware.
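One quick way to check (a sketch; the host filter is a placeholder for your indexer's name) is to timechart the indexer's own internal metrics events and look for empty buckets lining up with the gaps:

  index=_internal host=<your_indexer> source=*metrics.log
  | timechart span=30s count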
Maintenance mode is one parameter in the server.conf file, so when you copy it to the target it will be there. Then just disable maintenance mode and it will be removed from server.conf. If you change both name and IP there could be issues, as all peers and SHs use the name or IP to recognize the cluster! I'm not 100% sure whether the peers are actually recognized by GUID, but I would almost propose you do an offline rather than an online migration, and you need to change this on all peers before starting them. The same goes for the other components/nodes.
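For the mechanics, if memory serves, the cluster manager CLI can both verify and clear the flag after the move:

  # on the new cluster manager
  splunk show maintenance-mode
  splunk disable maintenance-mode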
Thanks. It looks like you migrated the IP but kept the DNS names. We'll be moving both. If we issue maintenance mode on the old Cluster Manager, then migrate, how would we ensure the maintenance mode is lifted after moving to the new one?
Hi @mattymo , Thanks for your detailed explanation. What format would be good for getting JSON data extracted automatically in Splunk? I can suggest the sender follow that format if possible. And will there be any problem if they remove the unwanted matter like the date/time? Also, they want all JSON field values to be extracted, not just specific ones, and it would be difficult to write regex for all of them.
@rohithvr19 The Python file should be in the bin folder of your app. Can you please confirm whether the individual script is working fine? Have you tried my shared app on your local machine? If my app is working fine, then try to add your Python code to this app. Thanks, KV
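In case it helps, a minimal scripted-input layout looks something like this (app, script and sourcetype names are hypothetical):

  my_app/
    bin/my_script.py
    default/inputs.conf

  # default/inputs.conf
  [script://$SPLUNK_HOME/etc/apps/my_app/bin/my_script.py]
  interval = 300
  sourcetype = my_script
  disabled = 0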
You have probably looked at this: https://dev.splunk.com/enterprise/tutorials/module_setuppages/plansetup ? One place where you could look for help is the CIM app. Another one is the TA for *nix, where those input values are modified on the setup screen.
Hi, Looking at the activity of the splunkd threads on the indexers, I've seen in the monitoring console that sometimes there is no activity for a period of 1 minute. Is this normal? [screenshot: evidence] Regards, thank you very much
@Eldemallawy 1. Try this (gives the amount of license used per index):

  index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
  | stats sum(b) as bytes by idx
  | eval mb=round(bytes/1024/1024,3)

If you want the overall figure, you can use this timechart version:

  index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
  | timechart span=1d sum(b) as usage_mb
  | eval usage_mb=round(usage_mb/1024/1024,3)

For per index per day, you can use this:

  index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
  | bucket span=1d _time
  | stats sum(b) as bytes by _time idx
  | eval mb=round(bytes/1024/1024,3)

2. Set up a Monitoring Console: https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview
Hey, So I have a playbook that fetches multiple files and adds them to the SOAR vault. I can then send each individual file to Jira by specifying the file's vault_id in the update_ticket action of the Jira app. Ideally I would like to send only one file over to Jira: an archive containing each of the other files. I can create a file and add it to the vault after seeing this post: https://community.splunk.com/t5/Splunk-SOAR/SOAR-Create-File-from-Artifacts/m-p/581662 However, I don't know how I could take each individual file from the vault and add it to this archive before I send it over. Any help would be appreciated! Thanks
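A hedged sketch of how a custom-function/playbook block could do this, assuming the vault_info/vault_add playbook APIs behave as documented for your SOAR version; the temp path and function name are hypothetical, so please verify against your install:

  import os
  import zipfile
  import phantom.rules as phantom

  def archive_vault_files(container, vault_ids, archive_name="evidence.zip"):
      # build the zip in a writable temp location (assumed path)
      archive_path = os.path.join("/opt/phantom/vault/tmp", archive_name)
      with zipfile.ZipFile(archive_path, "w") as zf:
          for vid in vault_ids:
              # vault_info returns (success, message, [file info dicts])
              success, message, info = phantom.vault_info(vault_id=vid)
              for item in info or []:
                  # 'path' is the on-disk location of the vault file
                  zf.write(item["path"], arcname=item["name"])
      # vault_add returns (success, message, new_vault_id)
      success, message, new_vault_id = phantom.vault_add(
          container=container, file_location=archive_path, file_name=archive_name)
      return new_vault_id

The returned vault_id could then be passed to the Jira update_ticket action as you do for single files today.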
I am building a Splunk dashboard that displays a table of content. Once it's displayed, I want to have a couple of buttons, Stop All and Start All; clicking them would in turn execute a search that invokes Python code to perform the actions. Can someone please guide me on whether that's possible?
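It should be possible via a custom search command that the buttons trigger, e.g. a token-driven search like "| servicecontrol action=stop_all". A hedged sketch of the command side, built on the Splunk Python SDK's searchcommands module (which must be bundled in the app); the command name and the control logic itself are hypothetical:

  # bin/servicecontrol.py -- hypothetical custom search command skeleton
  import sys
  import time
  from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

  def do_action(action):
      # placeholder: replace with the real stop/start logic (REST call, subprocess, ...)
      return "%s requested" % action

  @Configuration()
  class ServiceControlCommand(GeneratingCommand):
      action = Option(require=True)  # e.g. stop_all or start_all

      def generate(self):
          # emit one result so the dashboard search returns a confirmation row
          yield {"_time": time.time(), "action": self.action, "result": do_action(self.action)}

  dispatch(ServiceControlCommand, sys.argv, sys.stdin, sys.stdout, __name__)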
Thank you for the advice. We will prove it out with the customer as soon as I can and will respond.
Hi, wondering if there is a document or guidance on how to estimate the volume of data ingested into Splunk by pulling data from DNA Centre using the Splunk Add-on: Cisco DNA Center Add-on. Cheers, Ahmed.
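One practical approach, sketched here with a placeholder index name: point the add-on's inputs at a dedicated test index for a day or two and measure the result from the license log, rather than estimating up front:

  index=_internal source=*license_usage.log type=Usage idx=<your_test_index>
  | timechart span=1d sum(b) as bytes
  | eval mb=round(bytes/1024/1024,3)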
Note: I have an active token that looks similar to this: c0865140-53b4-4b53-a2d1-9571d39a5de8 My HTTP request has the following header: Authorization: Splunk c0865140-53b4-4b53-a2d1-9571d39a5de8 ... See more...
Note: I have an active token that looks similar to this: c0865140-53b4-4b53-a2d1-9571d39a5de8 My HTTP request has the following header: Authorization: Splunk c0865140-53b4-4b53-a2d1-9571d39a5de8 MY Splunk Cloud settings show HEC configuration to have SSL enabled and port 8088 (though these settings are grayed out and cannot be adjusted)
Hi Ismo, I am working on developing an app that updates the values in the inputs.conf file from the setup.xml configuration. Additionally, the app retrieves values from the inputs.conf file and loads them into Splunk.
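For reference, a setup.xml block can write straight to inputs.conf through the configs/conf-inputs endpoint; a sketch per the dev tutorial linked elsewhere in this thread, with the stanza and field names hypothetical:

  <setup>
    <block title="Input settings" endpoint="configs/conf-inputs" entity="my_input_stanza">
      <input field="interval">
        <label>Polling interval (seconds)</label>
        <type>text</type>
      </input>
    </block>
  </setup>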
Hi all, I just started a trial of Splunk Cloud; my URL looks similar to this: https://prd-p-s8qvw.splunkcloud.com/en-GB/app/launcher/home

I want to get data in with the HEC. I have read all the following documentation: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Data/UsetheHTTPEventCollector#Configure_HTTP_Event_Collector_on_Splunk_Cloud_Platform

According to the documentation, my URL should look like this: https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

However this does not work. It seems the DNS cannot be resolved; my NodeJS gives "ENOTFOUND". I have tried different options (HTTP / HTTPS, host, port etc.):

HTTP:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

HTTPS:
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

GCP:
http://http-inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

host:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs.s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs.s8qvw.splunkcloud.com:8088/services/collector/event

port:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs.prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs.prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs.s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs.s8qvw.splunkcloud.com:443/services/collector/event

No prefix:
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://p-s8qvw.splunkcloud.com:8088/services/collector/event
http://s8qvw.splunkcloud.com:8088/services/collector/event
http://s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://p-s8qvw.splunkcloud.com:8088/services/collector/event
https://s8qvw.splunkcloud.com:8088/services/collector/event
https://s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://p-s8qvw.splunkcloud.com:443/services/collector/event
http://s8qvw.splunkcloud.com:443/services/collector/event
http://s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://p-s8qvw.splunkcloud.com:443/services/collector/event
https://hs8qvw.splunkcloud.com:443/services/collector/event
https://s8qvw.splunkcloud.com:443/services/collector/event

None of these work. All give one of the following errors:
Error: getaddrinfo ENOTFOUND http-inputs-prd-p-s8qvw.splunkcloud.com
Error: read ECONNRESET
HTTP 400 Sent HTTP to port 443
HTTP 404 Not Found

Can anybody help me get this working?

Regards,

Lawrence
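One thing worth trying, hedged since it's based on my reading of the same docs page: free-trial stacks are documented with a different URI shape than paid stacks, inputs.<host> on port 8088 rather than the http-inputs- prefix, and that variant doesn't appear in the list above. A quick curl test (token value taken from the post above):

  curl "https://inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event" \
    -H "Authorization: Splunk c0865140-53b4-4b53-a2d1-9571d39a5de8" \
    -d '{"event": "hello world"}'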