All Posts


Thank you, Giuseppe!
Just a day ago I migrated from 9.1.5 to 9.1.6, and I can confirm the zombie processes are gone!
Thanks for your help, I really appreciate the time you put in. I asked in good faith that I can learn this. Streaming languages — like the jq I mentioned — are harder, yes; functional languages are harder, yes; but all of it is doable. Yes, the JSON shown is trickier, but I happen to know there will be just one query there, and if not, I can call one more reduce in jq and I'm good, still a one-liner. Regarding the SPL — the SPL solution is just crazy. I just cannot see what each individual part does and why, and the documentation makes it harder instead of easier. But I'm probably missing some basic premise of Splunk. OK, simple sample, simpler task:

| makeresults | eval _raw="{ \"json\": { \"class\": \"net.ttddyy.dsproxy.support.SLF4JLogUtils\", }, }"

Use regex to remove the word ttddyy and return .json.class. Should be exceptionally trivial. If I run just the thing above, I see the JSON. OK, now projection:

… | table _raw.json.class

no.

… | table json.class

no.

... | table _raw

yes. OK, so maybe it's not parsed or whatever. spath to the rescue:

… | spath _raw.json.path | table json.path //please consider this as all potential combinations of the cartesian product of all subpaths of _raw.json.path, and the same for table

no! OK(!!), maybe it's the input parameter:

... | spath input=_raw.json path=json.class | table json.class //all cartesian products again

no. OK... So maybe we need output, for something, I don't know what:

| spath input=_raw path=json.class output=aaaaa | table aaaaa //all cartesian products of subpaths again

30 minutes passed... It's just-a-simple-projection. No luck, and I didn't even get to the regex, which will be the true struggle (all regex flavors are really easy; that's not the crux of the problem).
chatgpt thinks this is the solution:

| makeresults | eval _raw="{\"json\": {\"class\": \"net.ttddyy.dsproxy.support.SLF4JLogUtils\"}}" | spath input=_raw path=json.class | table json.class

but it isn't. It does not print anything (it's so confusing that a trained chatgpt cannot do projection, OR it does not work on our heavily paid software). Can you please explain how simple projection works in Splunk, and what steps the engine really does internally? It's a real mystery. In jq, to compare:

jq -n '{ "json": { "class": "net.ttddyy.dsproxy.support.SLF4JLogUtils", } } | .json | .class | sub("ttddyy";"goddamm-easy")'

To explain: 1) declare the JSON, 2) take the .json subtree, 3) take the .class subtree, 4) do the replacement. 40 seconds, straightforward. UPDATE: if I export the data returned for table and then download it, I get different results in the file than on screen. So I guess that what-is-shown on screen differs from what-the-data-of-the-query-is, which is probably the source of the confusion: I'm working with something I can see, while Splunk probably works with some data structure I'm not aware of. { "preview": false, "result": { "json.msg": //:| ...
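For contrast outside Splunk, the projection-plus-substitution in this post can be sketched in plain Python (a hypothetical stand-in for the jq one-liner, not SPL; note the sample's trailing commas are removed here, since strict JSON parsers reject them):

```python
import json
import re

# Valid JSON: the trailing commas from the original sample are dropped,
# because strict JSON parsers reject them
raw = '{"json": {"class": "net.ttddyy.dsproxy.support.SLF4JLogUtils"}}'

# 1) parse the event
event = json.loads(raw)

# 2) + 3) project the .json subtree, then its .class field
cls = event["json"]["class"]

# 4) do the replacement, mirroring sub("ttddyy";"goddamm-easy")
result = re.sub(r"ttddyy", "goddamm-easy", cls)

print(result)  # net.goddamm-easy.dsproxy.support.SLF4JLogUtils
```

Whether the trailing commas are also what trips up spath on the original sample is an assumption worth testing; spath expects well-formed JSON.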
No, wait. source _is_ a metadata field already. You can use transforms either to cut it down as you initially planned or to extract data from it into another indexed field. You can also use EXTRACT or REPORT to extract the field at search time. There are many possibilities here.
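As a rough sketch of the index-time option (the sourcetype stanza and regex here are hypothetical; adjust them to your data and verify against the props.conf/transforms.conf specs), a transform that trims the filename off source might look like:

```
# props.conf
[your_sourcetype]
TRANSFORMS-trim_source = trim_source_filename

# transforms.conf
[trim_source_filename]
SOURCE_KEY = MetaData:Source
REGEX = ^(source::.*)/[^/]+$
FORMAT = $1
DEST_KEY = MetaData:Source
```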
I'll probably make a meta field as you suggested. I didn't want to do it at the start, but it seems to be the only way.
Hi, this is my first post; I'm a Splunk newbie. I have a case from my client: Splunk is running behind a load balancer (LB) in front of an SH cluster, and I'm already using LDAP for login access accounts in Splunk. When I check the audittrail log in a query table, it shows only one specific clientip or src. That is different from the first time I connected AD for login access to Splunk, and from our dev server, where we only use an AIO/standalone Splunk: there it shows the real IP of the user. But now, when I log in to Splunk Web, the audit trail log shows that one specific IP; I think it's the LB or AD IP. Even when I use a native user like "admin", it shows only that one IP, and it's not my device's IP. How can I make the real IP of the user show in the audittrail log while using an LB with an SH cluster, instead of only the one IP from the LB or AD?
Hi @PickleRick, Per your information yesterday that my inputs.conf would mess up the sourcetypes, I assessed all the sourcetypes generated on my search head. Could you please correct my inputs.conf? Here it is:

[monitor:///var/log/audit/audit.log]
disabled = false
index = NewIndex
sourcetype = linux_audit

[monitor:///var/log/auth.log]
disabled = false
index = NewIndex
sourcetype = auth-too_small

[monitor:///var/log/cron]
disabled = false
index = NewIndex
sourcetype = kern-too_small

[monitor:///var/log/kern.log]
disabled = false
index = NewIndex
sourcetype = kern-too_small

[monitor:///var/log/messages]
disabled = false
index = NewIndex
sourcetype = syslog

[monitor:///var/log/mongodb/mongod.log]
disabled = false
index = NewIndex
sourcetype = mongod-2

[monitor:///var/log/nginx/access.log]
disabled = false
index = NewIndex
sourcetype = access_combined

[monitor:///var/log/nginx/error-NewIndex-fe.log]
disabled = false
index = NewIndex
sourcetype = error-NewIndex-fe-too_small

[monitor:///var/log/nginx/jm-click-fe.log]
disabled = false
index = NewIndex
sourcetype = jm-click-fe-too_small

[monitor:///var/log/nginx/NewIndex-ess-http-3001.log]
disabled = false
index = NewIndex
sourcetype = NewIndex-ess-http-too_small

[monitor:///var/log/nginx/NewIndex-ess-pakta-http.log.1]
disabled = false
index = NewIndex
sourcetype = NewIndex-ess-http-too_small

[monitor:///var/log/nginx/NewIndex-jmpd-http.log]
disabled = false
index = NewIndex
sourcetype = access_combined

[monitor:///var/log/nginx/NewIndex-be.log]
disabled = false
index = NewIndex
sourcetype = access_combined

[monitor:///var/log/nginx/NewIndex-cms-be.log]
disabled = false
index = NewIndex
sourcetype = NewIndex-cms-be-too_small

[monitor:///var/log/redis/redis-server.log]
disabled = false
index = NewIndex
sourcetype = redis-server-too_small

[monitor:///var/log/sssd/sssd_NewIndex.co.id.log]
disabled = false
index = NewIndex
sourcetype = sssd_NewIndex.co.id-too_small

[monitor:///var/log/syslog]
disabled = false
index = NewIndex
sourcetype = syslog

[monitor:///var/log/ubuntu-advantage-timer.log]
disabled = false
index = NewIndex
sourcetype = ubuntu-advantage-timer.log-3

[monitor:///var/log/ubuntu-advantage.log]
disabled = false
index = NewIndex
sourcetype = ubuntu-advantage-6

[monitor:///var/log/ufw.log]
disabled = false
index = NewIndex
sourcetype = syslog

[monitor:///var/log/unattended-upgrades/unattended-upgrades.log]
disabled = false
index = NewIndex
sourcetype = unattended-upgrades

[monitor:///var/log/vmware-vmtoolsd-root.log]
disabled = false
index = NewIndex
sourcetype = vmware-vmtoolsd-root

[monitor:///home/*/.bash_history]
disabled = false
index = NewIndex
sourcetype = bash_history

Or maybe you have best-practice settings for my case?
This worked for me on my updated version - thanks!
I am using the following HTML for my alert action data entry screen. The tenant multi-select does not show up in the configuration dictionary of the payload object passed to the Python script. What am I doing wrong? Payload passed to the Python script: Payload: {'app': 'search', 'owner': 'jon_fournet@bmc.com', 'result_id': '1', 'results_file': '/opt/splunk/var/run/splunk/dispatch/rt_scheduler_am9uX2ZvdXJuZXRAYm1jLmNvbQ__search__sentToBHOM12_at_1727135173_17.19/per_result_alert/tmp_1.csv.gz', 'results_link': 'http://clm-aus-wm6fwd:8000/app/search/search?q=%7Cloadjob%20rt_scheduler_am9uX2ZvdXJuZXRAYm1jLmNvbQ__search__sentToBHOM12_at_1727135173_17.19%20%7C%20head%202%20%7C%20tail%201&earliest=0&latest=now', 'search_uri': '/servicesNS/jon_fournet%40bmc.com/search/saved/searches/sentToBHOM12', 'server_host': 'clm-aus-wm6fwd', 'server_uri': 'https://127.0.0.1:8089', 'session_key': 'juYpGOJO29CVEJXEhNFtlVZu0NdAUtGRObXSddXgB^nwDFZHofpZ58tDr^dfFRHcAeBKb3sKvtUNY48u7z2go^bDjUIR1K59YJhT3mkpPKXm3Vom_mXwSCA5rF2AQsgeoEuM332jKYMhEiZRakt1Qs69if_wD_QAPo', 'sid': 'rt_scheduler_am9uX2ZvdXJuZXRAYm1jLmNvbQ__search__sentToBHOM12_at_1727135173_17.19', 'search_name': 'sentToBHOM12', 'configuration': {'additional_info': 'This is an additional slot', 'category': 'AVAILABILITY_MANAGEMENT', 'ciid': 'test ciid', 'citype': 'testcitype', 'hostname': 'splunktesthost', 'logLevel': 'WARN', 'message': ' kkkk', 'object': 'testobject', 'originuri': 'testuri', 'severity': 'WARNING', 'subcategory': 'APPLICATION'}   HTML: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Information</title> <style> body { background-color: lightblue; font-family: Arial, sans-serif; } .container { width: 80%; margin: 20px auto; } .section { background-color: white; padding: 15px; margin-bottom: 20px; border: 2px solid black; border-radius: 5px; } .section h2 { margin-top: 0; } </style> </head> <body> <form class="form-horizontal
form-complex"> <h1>BHOM Tenant Configuration</h1> <div class="control-group"> <label class="control-label" for="bmc_tenants">Tenants</label> <div class="controls"> <select id="bmc_tenants" name="action.sendToBHOM.param.tenants" multiple size="3"> <option value="prod">Production</option> <option value="qa">QA</option> <option value="dev">Development</option> </select> <span class="help-block">The BHOM Tenants to forward alerts</span> </div> </div> <h1>BHOM Event Configuration</h1> <div class="control-group"><label class="control-label" for="bmc_severity">Severity</label> <div class="controls"><select id="bmc_severity" name="action.sendToBHOM.param.severity"> <option value="OK">Ok</option> <option value="WARNING">Warning</option> <option value="MINOR">Minor</option> <option value="MAJOR">Major</option> <option value="CRITICAL">Critical</option> </select><span class="help-block">The severity of the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_hostname">Source Hostname</label> <div class="controls"><input id="bmc_hostname" name="action.sendToBHOM.param.hostname" type="text" placeholder="e.g. splunk.bmc.com " /> <span class="help-block">The Hostname of the source of the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_object">Object</label> <div class="controls"><input id="bmc_object" name="action.sendToBHOM.param.object" type="text" placeholder="e.g. Splunk_log_1 " /> <span class="help-block">The Object related to the alert</span></div> </div> <div class="control-group"> <div class="control-group"><label class="control-label" for="bmc_category">Category</label> <div class="controls"><input id="bmc_category" name="action.sendToBHOM.param.category" type="text" placeholder="e.g. 
splunk.bmc.com " /> <span class="help-block">The Category related to the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_subcategory">Sub-Category</label> <div class="controls"><input id="bmc_subcategory" name="action.sendToBHOM.param.subcategory" type="text" placeholder="e.g. splunk.bmc.com " /> <span class="help-block">The Sub-Category related to the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_originuri">Origin URI</label> <div class="controls"><input id="bmc_originuri" name="action.sendToBHOM.param.originuri" type="text" placeholder="e.g. splunk.bmc.com " /> <span class="help-block">The Origin URI related to the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_ciid">CI ID</label> <div class="controls"><input id="bmc_ciid" name="action.sendToBHOM.param.ciid" type="text" placeholder="e.g. splunk.bmc.com " /> <span class="help-block">The CI ID related to the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_citype">CI Type</label> <div class="controls"><input id="bmc_citype" name="action.sendToBHOM.param.citype" type="text" placeholder="e.g. splunk.bmc.com " /> <span class="help-block">The CI Type related to the alert</span></div> </div> <div class="control-group"><label class="control-label" for="bmc_event_message">Message</label> <div class="controls"><textarea id="bmc_event_message" style="height: 120px;" name="action.sendToBHOM.param.message"> </textarea><span class="help-block">The message for the event sent to BHOM</span></div> </div> </div> <div class="control-group"><label class="control-label" for="bmc_additional_info">Additional Info</label> <div class="controls"><input id="bmc_additional_info" name="action.sendToBHOM.param.additional_info" type="text" placeholder="e.g.
splunk.bmc.com " /> <span class="help-block">The Additional Information related to the alert</span></div> </div> </div> <h1>Log Level (logs written to index _internal)</h1> <label for="logLevel">Choose a log level:</label> <select id="logLevel" name="action.sendToBHOM.param.logLevel"> <option value="INFO">INFO</option> <option value="WARN">WARNING</option> <option value="ERROR" selected>ERROR</option> <option value="DEBUG">DEBUG</option> </select> </form> </body> </html>
Hi @LY, The MIME type error should have been preceded by a status 404. console.js has been removed (technically, quarantined) from current versions of Splunk Enterprise. All modern browsers should have a native console object, and you can remove the util/console requirement from tokenlinks.js:

--- tokenlinks.js.original	2024-09-23 19:40:34.985325558 -0400
+++ tokenlinks.js	2024-09-23 19:40:59.456196826 -0400
@@ -1,10 +1,9 @@
 requirejs([
   '../app/simple_xml_examples/libs/jquery-3.6.0-umd-min',
   '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
-  'util/console',
   'splunkjs/mvc',
   'splunkjs/mvc/simplexml/ready!'
-], function($, _, console, mvc) {
+], function($, _, mvc) {
 
   function setToken(name, value) {
     console.log('Setting Token %o=%o', name, value);
@@ -51,4 +50,4 @@
     }
   }
 });
-});
\ No newline at end of file
+});

You can also remove the console.log() and console.warn() lines if you don't need them.
This error could be caused by a few things. Are you using an up-to-date protocol version? Do you have all the required certs? Are you actually routing through a proxy? Are there any errors beyond that one?
There should be an error in splunkd when you get redirected to unauthorized that states what user it was trying to log in as. Also if you changed it from samaccountname to userprincipalname you will have to modify it on the AD/ADFS side as well.
Here's an alternative that uses a few helper macros to replace the bitwise eval functions. Bit rotate functions would be a nice addition to Splunk, as would a parameter on all bitwise functions to specify width.

| makeresults
| eval HEX_Code="0002"
``` convert to number ```
| eval x=tonumber(HEX_Code, 16)
``` swap bytes ```
| eval t=`bitshl(x, 8)`, x=`bitshr(x, 8)`+`bitand_16(t, 65280)`
``` calculate number of trailing zeros (ntz) ```
| eval t=65535-x+1, y=`bitand_16(x, t)`
| eval bz=if(y>0, 0, 1), b3=if(`bitand_16(y, 255)`>0, 0, 8), b2=if(`bitand_16(y, 3855)`>0, 0, 4), b1=if(`bitand_16(y, 13107)`>0, 0, 2), b0=if(`bitand_16(y, 21845)`>0, 0, 1)
| eval ntz=bz+b3+b2+b1+b0
``` ntz=9 ```

# macros.conf
[bitand_16(2)]
args = x, y
definition = sum(1 * (floor($x$ / 1) % 2) * (floor($y$ / 1) % 2), 2 * (floor($x$ / 2) % 2) * (floor($y$ / 2) % 2), 4 * (floor($x$ / 4) % 2) * (floor($y$ / 4) % 2), 8 * (floor($x$ / 8) % 2) * (floor($y$ / 8) % 2), 16 * (floor($x$ / 16) % 2) * (floor($y$ / 16) % 2), 32 * (floor($x$ / 32) % 2) * (floor($y$ / 32) % 2), 64 * (floor($x$ / 64) % 2) * (floor($y$ / 64) % 2), 128 * (floor($x$ / 128) % 2) * (floor($y$ / 128) % 2), 256 * (floor($x$ / 256) % 2) * (floor($y$ / 256) % 2), 512 * (floor($x$ / 512) % 2) * (floor($y$ / 512) % 2), 1024 * (floor($x$ / 1024) % 2) * (floor($y$ / 1024) % 2), 2048 * (floor($x$ / 2048) % 2) * (floor($y$ / 2048) % 2), 4096 * (floor($x$ / 4096) % 2) * (floor($y$ / 4096) % 2), 8192 * (floor($x$ / 8192) % 2) * (floor($y$ / 8192) % 2), 16384 * (floor($x$ / 16384) % 2) * (floor($y$ / 16384) % 2), 32768 * (floor($x$ / 32768) % 2) * (floor($y$ / 32768) % 2))
iseval = 0

[bitshl(2)]
args = x, k
definition = floor(pow(2, $k$) * $x$)
iseval = 0

[bitshr(2)]
args = x, k
definition = floor(pow(2, -$k$) * $x$)
iseval = 0
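As a cross-check of the macro arithmetic in plain Python (a hypothetical equivalent just to verify the math, not SPL):

```python
x = int("0002", 16)                  # HEX_Code -> 2

# swap the two bytes of the 16-bit value (the bitshl/bitshr + bitand_16 step)
x = ((x << 8) | (x >> 8)) & 0xFFFF   # 0x0200 = 512

# isolate the lowest set bit via the two's-complement trick
# (t = 65535 - x + 1 in the SPL), then count trailing zeros
y = x & (-x & 0xFFFF)
ntz = y.bit_length() - 1 if y else 16

print(ntz)  # 9, matching the ntz=9 comment in the SPL
```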
index=_internal source=*license_usage.log type="Usage"
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval sourcetypename = st
| bin _time span=1d
| stats sum(b) as b by _time, pool, indexname, sourcetypename
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, indexname, sourcetypename, GB
Use INDEXED_EXTRACTIONS = CSV in props.conf for your sourcetype, and push it to the Universal Forwarder too, along with inputs.conf.

props.conf
[DataDeletion]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
FIELD_NAMES = field1, field2, field3, field4 # (Replace with actual field names)
TIME_FORMAT = %Y-%m-%d %H:%M:%S # (Adjust based on your timestamp format)
TIMESTAMP_FIELDS = timestamp_field # (Replace with the actual field containing the timestamp)

------ If you find this solution helpful, please consider accepting it and awarding karma points !!
Try this:

| rest /services/authentication/users
| rename title as user
| table user realname roles email
| join type=left user
    [search index=_audit sourcetype=audittrail action=success AND info=succeeded
    | stats max(_time) as last_login_time by user
    | where last_login_time > relative_time(now(), "-7d")
    | table user last_login_time ]
| where isnull(last_login_time) OR last_login_time < relative_time(now(), "-7d")

------ If you find this solution helpful, please consider accepting it and awarding karma points !!
Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on it: https://www.duanewaddle.com/proving-a-negative/ In this case, what you have just needs a little tweaking.

index=_audit sourcetype=audittrail action=success AND info=succeeded
| eval secondsSinceLastSeen=now()-_time
| stats count, min(secondsSinceLastSeen) as secondsSinceLastSeen BY user
| append
    [| rest splunk_server=local /services/authentication/users
    | rename title as user
    | eval count=0
    | fields user count ]
| stats sum(count) AS total BY user
| where total=0
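The append/stats pattern above is, at heart, a set difference: every known user gets a zero row, logins add to the count, and users whose total stays zero never appeared in the audit log. A hypothetical Python sketch of that logic (user names invented for illustration):

```python
# users known to Splunk (what the | rest .../authentication/users append supplies)
all_users = {"alice", "bob", "carol", "dave"}

# users with a successful login in the search window (from index=_audit)
seen_users = {"alice", "carol"}

# sum(count) stays 0 only for users that never showed up in _audit
never_logged_in = sorted(all_users - seen_users)

print(never_logged_in)  # ['bob', 'dave']
```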
I want to show which users have not logged into Splunk for the last 30 or 90 days. For example: we have 300 users with access to the Splunk UI, and I want to know who has not logged into Splunk for more than 7 days. The query below shows who has logged in, but I want to show who has not logged in, along with their last login time.

index=_audit sourcetype=audittrail action=success AND info=succeeded
| eval secondsSinceLastSeen=now()-_time
| eval timeSinceLastSeen=tostring(secondsSinceLastSeen, "duration")
| stats count BY user timeSinceLastSeen
| append
    [| rest /services/authentication/users
    | rename title as user
    | eval count=0
    | fields user ]
| stats sum(count) AS total BY user timeSinceLastSeen
I have had a few issues ingesting data into the correct index. We are deploying an app from the deployment server, and this particular app has two clients. Initially, when I set this app up, I was ingesting data into our o365 index. This data looked somewhat like: We have a team running a script that tracks all deleted files, and we were getting one line per event. At the time, my inputs.conf looked like:

[monitor://F:\scripts\DataDeletion\SplunkReports]
index=o365
disabled=false
source=DataDeletion

It would ingest all CSV files within that DataDeletion directory; in this case, it ingested everything under it. This worked. I changed the index to testing so I could manage the new data a bit better while we were still testing it. One inputs.conf backup shows that I had this at some point:

[monitor://F:\scripts\DataDeletion\SplunkReports\*.csv]
index=testing
disabled=false
sourcetype=DataDeletion
crcSalt = <string>

Now, months later, I have changed inputs.conf to ingest everything into the o365 index, applied that change, and pushed it out to the class using the deployment server, and yet the most recent data looks different. The last events we ingested went into the testing index and looked like: This may be due to how the script is sending data into Splunk, but it looks like it's aggregating hundreds of separate lines into one event.
My inputs.conf currently looks like this:

[monitor://F:\scripts\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://F:\SCRIPTS\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://D:\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

I am just trying to grab everything under D:\DataDeletion\SplunkReports\ on the new Windows servers, ingest all of the CSV files under there, and break each line of the CSV into a new event. What is the proper syntax for this input, and what am I doing wrong? I have tried a few things, and none of them seem to work. I've tried adding a whitelist and adding a blacklist, and I have recursive and crcSalt there just to grab anything and everything. And if the script isn't at fault for sending chunks of data in one event, would adding a props.conf fix how Splunk is ingesting this data? Thanks for any help.
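As one sketch to try (assumptions: paths taken from the post, whitelist matched as a regex against the full file path, and the event merging being a line-breaking issue rather than the script), a single stanza per directory with an uncommented whitelist plus a props.conf line-breaking rule might look like:

```
# inputs.conf -- one stanza per directory, whitelist uncommented and anchored
[monitor://D:\DataDeletion\SplunkReports]
index = o365
disabled = 0
sourcetype = DataDeletion
recursive = true
whitelist = \.csv$

# props.conf -- force one event per line if events are being merged
[DataDeletion]
SHOULD_LINEMERGE = false
```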
You can rewrite any metadata field, including source, sourcetype, and host, using transforms. But, to be honest, I don't understand why you would want to lose information (the actual source file). You can always extract that info at search time if you want just the directory.