All Posts

This error could be caused by a few things. Do you have an updated protocol? Do you have all the required certs? Are you actually routing through a proxy? Are there any other errors beyond that one?
There should be an error in splunkd when you get redirected to unauthorized that states which user it was trying to log in as. Also, if you changed it from sAMAccountName to userPrincipalName, you will have to modify it on the AD/ADFS side as well.
Here's an alternative that uses a few helper macros to replace the bitwise eval functions. Bit rotate functions would be a nice addition to Splunk, as would a parameter on all bitwise functions to specify width.

| makeresults
| eval HEX_Code="0002"
``` convert to number ``` | eval x=tonumber(HEX_Code, 16)
``` swap bytes ``` | eval t=`bitshl(x, 8)`, x=`bitshr(x, 8)`+`bitand_16(t, 65280)`
``` calculate number of trailing zeros (ntz) ``` | eval t=65535-x+1, y=`bitand_16(x, t)`
| eval bz=if(y>0, 0, 1), b3=if(`bitand_16(y, 255)`>0, 0, 8), b2=if(`bitand_16(y, 3855)`>0, 0, 4), b1=if(`bitand_16(y, 13107)`>0, 0, 2), b0=if(`bitand_16(y, 21845)`>0, 0, 1)
| eval ntz=bz+b3+b2+b1+b0 ``` ntz=9 ```

# macros.conf

[bitand_16(2)]
args = x, y
definition = sum(1 * (floor($x$ / 1) % 2) * (floor($y$ / 1) % 2), 2 * (floor($x$ / 2) % 2) * (floor($y$ / 2) % 2), 4 * (floor($x$ / 4) % 2) * (floor($y$ / 4) % 2), 8 * (floor($x$ / 8) % 2) * (floor($y$ / 8) % 2), 16 * (floor($x$ / 16) % 2) * (floor($y$ / 16) % 2), 32 * (floor($x$ / 32) % 2) * (floor($y$ / 32) % 2), 64 * (floor($x$ / 64) % 2) * (floor($y$ / 64) % 2), 128 * (floor($x$ / 128) % 2) * (floor($y$ / 128) % 2), 256 * (floor($x$ / 256) % 2) * (floor($y$ / 256) % 2), 512 * (floor($x$ / 512) % 2) * (floor($y$ / 512) % 2), 1024 * (floor($x$ / 1024) % 2) * (floor($y$ / 1024) % 2), 2048 * (floor($x$ / 2048) % 2) * (floor($y$ / 2048) % 2), 4096 * (floor($x$ / 4096) % 2) * (floor($y$ / 4096) % 2), 8192 * (floor($x$ / 8192) % 2) * (floor($y$ / 8192) % 2), 16384 * (floor($x$ / 16384) % 2) * (floor($y$ / 16384) % 2), 32768 * (floor($x$ / 32768) % 2) * (floor($y$ / 32768) % 2))
iseval = 0

[bitshl(2)]
args = x, k
definition = floor(pow(2, $k$) * $x$)
iseval = 0

[bitshr(2)]
args = x, k
definition = floor(pow(2, -$k$) * $x$)
iseval = 0
index=_internal source=*license_usage.log type="Usage"
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval sourcetypename = st
| bin _time span=1d
| stats sum(b) as b by _time, pool, indexname, sourcetypename
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, indexname, sourcetypename, GB
Use INDEXED_EXTRACTIONS = CSV in props.conf for your sourcetype and push it to the Universal Forwarder too, along with inputs.conf.

props.conf

[DataDeletion]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
FIELD_NAMES = field1, field2, field3, field4 # (Replace with actual field names)
TIME_FORMAT = %Y-%m-%d %H:%M:%S # (Adjust based on your timestamp format)
TIMESTAMP_FIELDS = timestamp_field # (Replace with the actual field containing the timestamp)

------
If you find this solution helpful, please consider accepting it and awarding karma points!
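For completeness, a minimal inputs.conf sketch to pair with the props.conf above might look like the following; the monitor path and index name are assumptions taken from the rest of this thread, so adjust them to your environment.

inputs.conf

[monitor://F:\scripts\DataDeletion\SplunkReports\*.csv]
# assumed path and index; the sourcetype must match the props.conf stanza above
index = o365
sourcetype = DataDeletion
disabled = 0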
Try this:

| rest /services/authentication/users
| rename title as user
| table user realname roles email
| join type=left user
    [ search index=_audit sourcetype=audittrail action=success AND info=succeeded
    | stats max(_time) as last_login_time by user
    | where last_login_time > relative_time(now(), "-7d")
    | table user last_login_time ]
| where isnull(last_login_time) OR last_login_time < relative_time(now(), "-7d")

------
If you find this solution helpful, please consider accepting it and awarding karma points!
Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on it: https://www.duanewaddle.com/proving-a-negative/

In this case, what you have just needs a little tweaking.

index=_audit sourcetype=audittrail action=success AND info=succeeded
| eval secondsSinceLastSeen=now()-_time
| stats count, min(secondsSinceLastSeen) as secondsSinceLastSeen BY user
| append
    [| rest splunk_server=local /services/authentication/users
    | rename title as user
    | eval count=0
    | fields user count ]
| stats sum(count) AS total BY user
| where total=0
I want to show which users have not logged into Splunk for the last 30 or 90 days. For example: we have 300 users with access to the Splunk UI, and I want to know who has not logged into Splunk for more than 7 days. The query below shows who has logged into Splunk, but I want to show who has not logged in, along with their last login time.

index=_audit sourcetype=audittrail action=success AND info=succeeded
| eval secondsSinceLastSeen=now()-_time
| eval timeSinceLastSeen=tostring(secondsSinceLastSeen, "duration")
| stats count BY user timeSinceLastSeen
| append
    [| rest /services/authentication/users
    | rename title as user
    | eval count=0
    | fields user ]
| stats sum(count) AS total BY user timeSinceLastSeen
I have had a few issues ingesting data into the correct index. We are deploying an app from the deployment server, and this particular app has two clients. Initially, when I set this app up, I was ingesting data into our o365 index. We have a team running a script that tracks all deleted files, and we were getting one line per event (sample screenshot omitted). At the time, my inputs.conf looked like:

[monitor://F:\scripts\DataDeletion\SplunkReports]
index=o365
disabled=false
source=DataDeletion

It would ingest all CSV files within that DataDeletion directory. In this case, it ingested everything under that directory. This worked. I changed the index to testing so I could manage the new data a bit better while we were still testing it. One inputs.conf backup shows that I had this at some point:

[monitor://F:\scripts\DataDeletion\SplunkReports\*.csv]
index=testing
disabled=false
sourcetype=DataDeletion
crcSalt = <string>

Now, months later, I have changed the inputs.conf to ingest everything into the o365 index, and I have applied that change and pushed it out to the server class using the deployment server, and yet the most recent data looks different. The last events we ingested went into the testing index (sample screenshot omitted). This may be due to how the script is sending data into Splunk, but it looks like it is aggregating hundreds of separate lines into one event. My inputs.conf currently looks like this:

[monitor://F:\scripts\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://F:\SCRIPTS\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://D:\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

I am just trying to grab everything under D:\DataDeletion\SplunkReports\ on the new Windows servers, ingest all of the CSV files under there, and break each line of the CSV into a new event. What is the proper syntax for these inputs, and what am I doing wrong? I have tried a few things and none of them seem to work. I have tried adding a whitelist and adding a blacklist, and I have recursive and crcSalt there just to grab anything and everything. And if the script isn't at fault for sending chunks of data in one event, would adding a props.conf fix how Splunk is ingesting this data? Thanks for any help.
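As a hedged sketch only: if the goal is one event per CSV row, a props.conf stanza for that sourcetype might look roughly like the following. These settings are illustrative rather than a confirmed fix for this particular script's output, and INDEXED_EXTRACTIONS would need to be deployed to the Universal Forwarder alongside the inputs.conf.

[DataDeletion]
# assumption: the files are standard CSVs with a header row
INDEXED_EXTRACTIONS = csv
# break at every newline so each row becomes its own event
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)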
You can rewrite any metadata field, including source, sourcetype, and host, using transforms. But, to be honest, I don't understand why you would want to lose information (the actual source file). You can always extract that info at search time if you want just the directory.
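To illustrate that first point, an index-time rewrite of source that keeps only the directory might look roughly like this; the stanza names and the regex for a Windows-style path are assumptions, not a recommendation to actually do it.

props.conf

[DataDeletion]
TRANSFORMS-strip_source_filename = strip_source_filename

transforms.conf

[strip_source_filename]
# rewrite the source metadata key, dropping the trailing \filename
SOURCE_KEY = MetaData:Source
REGEX = ^(.+)\\[^\\]+$
DEST_KEY = MetaData:Source
FORMAT = $1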
Well... there are as many "good" answers as there are admins, and each approach probably has its pros and cons. Regardless of the actual upgrade schedule, it's important - especially if you have a big environment - not to just uncontrollably push a new version everywhere, but to phase the deployment: first some dev environment, then a selected few pilot machines, and only then the rest of the environment. And be prepared to downgrade in case of problems. For me it's not so much about the actual frequency of updates as about triggers. If there are vulnerabilities (important to you; not all vulnerabilities are exploitable in all environments) patched in a new version - upgrade. If there are new functionalities important to you now or in the foreseeable future - upgrade. If there are important bug fixes - upgrade. Otherwise - "if it ain't broke, don't fix it". Mostly. It's good to stay within a maintained version range - you wouldn't want to use a 6.x version nowadays unless you really have no other choice. Of course, as @gcusello said, you're limited by which versions are supported by your OS and you can't - for example - install a 9.3 UF on a Raspberry Pi 2 or Windows 2008 32-bit, because there is no such version available for those architectures.
Hi @PiotrAp,
If you don't have an intermediate HF, you should upgrade to the latest Splunk Cloud version. If you have an intermediate HF, it must be aligned to the Splunk Cloud version, and the UFs to the HF version. I never use the n-1 version approach; I always install the latest released version. If you can, it's always better to upgrade as soon as a new version is released, but I understand that's not possible in a large infrastructure, so a frequency of once a year is a good compromise between cost and the need to update.
Ciao.
Giuseppe
@vjsplunk - Glad you found a solution to your problem. Please accept your own answer by clicking "Accept as Answer" so that community users can benefit from it in the future.

Community Moderator, Vatsal Jagani
Hi Giuseppe, many thanks for your reply. So should I update it once a year? If so, should I install the latest possible version or use something like N-1? How do you do this in your environment? We have the Splunk Cloud version.
I tried those two options already, with no good results. Thank you.
Hi @sverdhan,
You asked for a list of sourcetypes. If you want all the sourcetypes, you could try:

index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by h fixedrange=false
| fields - _timediff
| foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

That's the one you can find in the license usage report.
Ciao.
Giuseppe
Hey, I have a problem after upgrading from 9.0.4 to 9.1.5 (Enterprise). All the dashboards that use tokenlinks.js from the "simple_xml_examples" (Splunk Dashboard Examples) app, latest version, show the following error and the script doesn't work:

"A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details."

In the dev tools (F12) I saw the error comes from common.js:

"Refused to execute script from '/en-US/static/@29befd543def.77/js/util/console.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
common.js:1702 Error: Script error for: util/console http://requirejs.org/docs/errors.html#scripterror
    at makeError (eval at e.exports (common.js:502:244924), <anonymous>:166:17)
    at HTMLScriptElement.onScriptError (eval at e.exports (common.js:502:244924), <anonymous>:1689:36)"

Does anyone have any idea why this happens, or how to fix it? Thanks!
Thank you. Do you have a general query to calculate the volume ingested for any sourcetype?
https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/inputConfig#Manually_refresh_dashboards_with_a_submit_button