All Posts

HF1 has the sender add-on installed, with outputs.conf configured for UDP and the input IP interface (default configuration) - not working. We have checked connectivity with the command "nc -vzu host port" and the UDP port shows as open. Any ideas?
Hi, after trying this, it does not work, sorry - any other ideas? Thanks, Robert
Hi @selvam_sekar, have you explored the timewrap command at https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Timewrap ? Ciao. Giuseppe
I have a challenge: when somebody makes changes to our AD, it is done using a CyberArk account. In order to find the person behind the CyberArk account, I need to go back and find the event where a person checks out an account. So I have an AD change at 01:27 with user=pam-serveradmin01, and from CyberArk at 01:05 account=pam-serveradmin and user=clt. How would you build this query?
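One possible sketch of this correlation, assuming hypothetical index/sourcetype names (index=wineventlog for the AD changes, index=cyberark for the checkout events) and assuming the checked-out account name matches the AD user once a trailing numeric suffix is stripped (as with pam-serveradmin01 vs pam-serveradmin) - adjust all of these to your environment. The idea is to sort both event types by time, carry the most recent checkout user forward with streamstats, and keep only the AD change events:

```
(index=wineventlog sourcetype=ad:change) OR (index=cyberark action=checkout)
| eval acct=replace(coalesce(account, user), "\d+$", "")
| sort 0 _time
| eval checkout_user=if(index=="cyberark", user, null())
| streamstats last(checkout_user) AS person BY acct
| where index=="wineventlog"
| table _time user person
```

streamstats ignores null values, so each AD change row picks up the last non-null checkout user seen for that account before it.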
{
  "visualizations": {
    "viz_1putkd4H": {
      "type": "splunk.table",
      "options": {},
      "dataSources": {
        "primary": "ds_P8DuhImO"
      }
    }
  },
  "dataSources": {
    "ds_P8DuhImO": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| fields - _time\n| addinfo\n| rename info_min_time as earliest\n| rename info_max_time as latest\n| fieldformat earliest=strftime(earliest,\"%F %T\")\n| fieldformat latest=strftime(latest,\"%F %T\")\n| table earliest latest"
      },
      "name": "time_selected"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "display": "auto-scale",
      "height": 1200
    },
    "structure": [
      {
        "item": "viz_1putkd4H",
        "type": "block",
        "position": {
          "x": 0,
          "y": 0,
          "w": 1200,
          "h": 90
        }
      }
    ],
    "globalInputs": [
      "input_global_trp"
    ]
  },
  "description": "",
  "title": "studio times"
}
Totally agree. I value everyone's contribution and restricting my question only to a certain individual will only delay or prolong the process. Apologies for that. This won't be repeated. Let me try @gcusello's solution and get back! Thanks guys!
It looks like @gcusello has provided a good answer. It is probably best not to call out individuals when first posing a question; we are all volunteers here, and you don't know when the requested volunteer will be available. Others may also feel they shouldn't answer when the question is directed at particular volunteers (do you not value others' contributions?)
Hi @koyachi, you can list all the forwarders that sent logs in a period (e.g. 90 days) but haven't sent any logs in the last 7 days using a search like the following:

| tstats count latest(_time) AS latest WHERE earliest=-90d@d latest=now BY host
| where latest<now()-(86400*7)
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table host latest

Obviously, you can use the time periods you need in this search. Ciao. Giuseppe
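An alternative sketch uses the metadata command, which returns the last event time per host without scanning events (note it reports hosts rather than forwarders, and lastTime reflects event timestamps, not when the data arrived):

```
| metadata type=hosts index=*
| where lastTime < now() - (86400*7)
| eval lastTime=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| table host lastTime
| sort lastTime
```

This is usually faster than tstats for a quick inventory, but it can't be filtered by arbitrary time ranges the same way.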
Is there a way to do this already? Stuck at the exact same point.
If I understand correctly, you want to extract the value with the special characters into new_field, so that you can replace the special characters more easily? Try something like this:

| eval new = if(sourcetype=="custom:data", mvmap(old_field,replace(old_field,"\x7b.*?\x22bundle\x22\x3a\s+\x22((?:net|jp|uk|fr|se|org|com|gov)\x2e(\w+)\x2e.*?)\x22.*?name\x22\x3a(?:\s+)?\x22([^\x22]+)\x22.*?\x22sw_uid\x22\x3a(?:\s+)?\x22(([a-fA-F0-9]+)|[\w_:]+)\x22.*?\x22version\x22\x3a(?:\s+)?\x22(.*?)\x22.*$","cpe:2.3:a:\2:\3:\5:*:*:*:*:*:*:* - \1 - \4")),new)

Note that there was also a mistake in the fourth group: it should not have been a non-capturing group.
Hi Folks, We have thousands of universal forwarders that are currently running on an old version (7.0.2). We are planning to upgrade the universal forwarders to the most recent version, but before we do that we would like to reduce their overall footprint by uninstalling them from the servers that are no longer sending logs. Logs for a few applications and some infrastructure have been migrated to Azure, so those servers no longer send anything to Splunk. I need to find a list of such servers so I can uninstall the forwarders before the mass upgrade. Is there a query that can give me the list of hostnames along with the timestamp of the last log each one sent? Thanks in advance
1. Do you use indexed extractions or not? 2. Do you have time extraction properly configured (TIME_PREFIX, TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD)?
Are you sure that the file is _rotated_ (as in renamed and compressed)? That behaviour is pretty consistent with the "copytruncate" behaviour of logrotate, where the contents of the file are copied out to a new file and the original file is truncated afterwards. In that case the file descriptor does not change, but Splunk suddenly finds itself past the end of the data, so it most probably assumes it had already read all the data there was.
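For reference, the behaviour described above corresponds to a logrotate stanza like this (the path is only an example):

```
/var/log/myapp/app.log {
    daily
    rotate 7
    compress
    copytruncate
}
```

With copytruncate, logrotate copies the log out and truncates the original in place, so the inode and any open file descriptors stay the same; a reader positioned at the old offset suddenly finds itself past the new end of file. The alternative is the default rename-and-recreate rotation, which Splunk's file tracking handles cleanly.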
Ok. It is binary on the wire. It's just escaped either on input or when being presented in search (I never remember whether Splunk escapes such stuff on input or stores it raw). You can just run tcpdump on the Splunk side - it should show the same thing, of course.
The first question is whether you indeed have special characters that are displayed this way, or whether they were rendered before/on ingest and are stored as literal "\xsomething" strings, because that will change the way you must match them.
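A quick sketch to tell the two cases apart, assuming the data is in _raw: if the stored text contains a literal backslash-x-hex sequence, the escaping happened before or at ingest; otherwise you may be looking at real bytes that are only escaped at display time:

```
| eval looks_escaped=if(match(_raw, "\\\\x[0-9A-Fa-f]{2}"), "stored as literal \\xNN text", "possibly real binary bytes")
| stats count BY looks_escaped
```

(The quadruple backslash is needed because eval string literals and the regex engine each consume one level of escaping.)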
It's not binary, more like hex-encoded, see below: \x00}\x00\x00ye\xBBE\x9A9\xEA!\xBE<\x8F$W\xBB\xC9EP\xA3\x8Ff\xECn_\x9D\xEB\xE8\xF8i\xDE\xD7\x00\x00,\x00\x9F\x00k\x00\xA3\x00j\x009\x008\x00\x9D\x00=\x005\x00\xA2\x00@\x002\x00\x9E\x00g\x003\x00\x9C\x00<\x00/\x00\x00\x00  1. Yes, our inputs.conf is attached above; after every change we restart the Splunk services. 2. We're trying to get approval from the ePO admin to run Wireshark on the server; if not, we'll just generate MER logs and send them back to ePO support.
And you have those results in multivalued fields? In separate result rows?
Hi @inayshon, what kind of issue are you experiencing? Anyway, you should be able to find your dashboard in the "Dashboards" section of the app you were in when you created it, obviously only when accessing Splunk with your own account and not a different one. If your app doesn't have a Dashboards section, you can manually enter in the URL bar: http://<your_splunk_server>:8000/en-US/app/<your_app>/dashboards Ciao. Giuseppe
Having issues accessing my dashboard that I'm seeing using my Coursera course link...