All Posts


I want to show which users have not logged into Splunk for the last 30 or 90 days. For example: we have 300 users with access to the Splunk UI, and I want to know who has not logged into Splunk for more than 7 days. The query below shows who has logged in, but I want to show who has not logged in, along with their last login time.

index=_audit sourcetype=audittrail action=success AND info=succeeded
| eval secondsSinceLastSeen=now()-_time
| eval timeSinceLastSeen=tostring(secondsSinceLastSeen, "duration")
| stats count BY user timeSinceLastSeen
| append
    [| rest /services/authentication/users
     | rename title as user
     | eval count=0
     | fields user ]
| stats sum(count) AS total BY user timeSinceLastSeen
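A minimal sketch of one way to get at this, assuming the same _audit data and REST user list as above; the 30-day window, the lastLogin field, and the status column are illustrative choices, not part of the original query:

| rest /services/authentication/users splunk_server=local
| rename title AS user
| fields user
| join type=left user
    [ search index=_audit sourcetype=audittrail action=success info=succeeded earliest=-30d
      | stats max(_time) AS lastLogin BY user ]
| eval status=if(isnull(lastLogin), "no login in last 30 days", strftime(lastLogin, "%Y-%m-%d %H:%M:%S")) ``` earliest=-30d is an example window; adjust to 7/30/90 days as needed ```
| table user status

Starting from the REST user list and left-joining the audit data means users with no matching login events still appear in the results, which is what the count=0 append in the original query is trying to achieve.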
I have had a few issues ingesting data into the correct index. We are deploying an app from the deployment server, and this particular app has two clients. Initially, when I set this app up, I was ingesting data into our o365 index. We have a team running a script that tracks all deleted files, and we were getting one line per event. At the time, my inputs.conf looked like:

[monitor://F:\scripts\DataDeletion\SplunkReports]
index=o365
disabled=false
source=DataDeletion

It would ingest all CSV files within that DataDeletion directory. In this case, it ingested everything under that directory. This worked.

I changed the index to testing so I could manage the new data a bit better while we were still testing it. One inputs.conf backup shows that I had this at some point:

[monitor://F:\scripts\DataDeletion\SplunkReports\*.csv]
index=testing
disabled=false
sourcetype=DataDeletion
crcSalt = <string>

Now, months later, I have changed the inputs.conf to ingest everything into the o365 index, applied that change, and pushed it out to the server class using the deployment server, and yet the most recent data looks different. The last events we ingested went into the testing index, and it looks like hundreds of separate lines are being aggregated into one event (this may be due to how the script is sending data into Splunk). My inputs.conf currently looks like this:

[monitor://F:\scripts\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://F:\SCRIPTS\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://D:\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

I am just trying to grab everything under D:\DataDeletion\SplunkReports\ on the new Windows servers, ingest all of the CSV files under there, and break each line of the CSV into a new event. What is the proper syntax for this input, and what am I doing wrong? I have tried a few things and none of them seem to work. I've tried adding a whitelist and adding a blacklist, and I have recursive and crcSalt there just to grab anything and everything. And if the script isn't at fault for sending chunks of data in one event, would adding a props.conf fix how Splunk is ingesting this data? Thanks for any help.
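If the script turns out not to be at fault, line breaking is controlled by props.conf rather than inputs.conf. A minimal sketch, assuming the files really are plain CSV and the sourcetype stays DataDeletion; where each attribute takes effect depends on your topology (INDEXED_EXTRACTIONS is honoured on the universal forwarder for structured data, the other settings on the first full Splunk instance):

[DataDeletion]
# treat every line as its own event instead of merging lines
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# assumption: well-formed CSV files with a header row
INDEXED_EXTRACTIONS = csv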
You can rewrite any metadata field, including source, sourcetype, and host, using transforms. But, to be honest, I don't understand why you would want to lose information (the actual source file). You can always extract that info at search time if you want just the directory.
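For illustration, a sketch of what an index-time rewrite of source could look like with props.conf and transforms.conf; the stanza names, sourcetype, and regex here are hypothetical and would need to match your actual paths:

props.conf
[DataDeletion]
TRANSFORMS-rewrite_source = source_to_directory

transforms.conf
[source_to_directory]
# keep only the directory part of the original source path
SOURCE_KEY = MetaData:Source
REGEX = ^source::(.*)[\\/][^\\/]+$
DEST_KEY = MetaData:Source
FORMAT = source::$1

Note that MetaData:Source values carry a source:: prefix, which is why both the REGEX and the FORMAT account for it.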
Well... there are as many "good" answers as there are admins, and each approach probably has its pros and cons. Regardless of the actual upgrade schedule it's important - especially if you have a big environment - not to just uncontrollably push a new version everywhere but to phase the deployment - first some dev environment, then a selected few pilot machines, and only then the rest of the environment. And be prepared to downgrade in case of problems. And for me it's not as much about the actual frequency of updates as about the triggers. If there are some vulnerabilities (important to you; not all vulnerabilities are exploitable in all environments) patched with a new version - upgrade. If there are new functionalities important to you now or in the foreseeable future - upgrade. If there are important bug fixes - upgrade. Otherwise - "if it ain't broke, don't fix it". Mostly. It's good to stay within a maintained version range - you wouldn't want to use a 6.x version nowadays unless you really have no other choice. Of course, as @gcusello said, you're limited by what versions are supported by your OS and you can't - for example - install a 9.3 UF on a Raspberry Pi 2 or Windows 2008 32-bit, because there is no such version available for those architectures.
Hi @PiotrAp , if you don't have an intermediate HF, you should upgrade to the latest Splunk Cloud version. If you have an intermediate HF, it must be aligned to the Splunk Cloud version, and the UFs to the HF version. I never use the n-1 version approach; I always install the latest released version. If you can, it's always better to upgrade as soon as a new version is released, but I understand that's not possible in a large infrastructure, so a frequency of once a year is a good compromise between costs and update necessity. Ciao. Giuseppe
@vjsplunk - Glad you found a solution to your problem. Please accept your own answer by clicking "Accept as Answer" so that community users benefit from it in the future. Community Moderator, Vatsal Jagani
Hi Giuseppe, many thanks for your reply. So should I update it once a year? If so, should I install the latest possible version or use something like N-1? How do you do this in your environment? We have the Splunk Cloud version.
I tried those 2 options already with no good results. Thank you.
Hi @sverdhan , you asked for a list of sourcetypes. If you want all the sourcetypes, you could try:

index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by h fixedrange=false
| fields - _timediff
| foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

That's the one you can find in the License Usage report. Ciao. Giuseppe
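Since the ask was volume per sourcetype, a hedged variant of the same search split by st (sourcetype) rather than by h (host) might look like the sketch below; the split field and the volumeGB name are the only changes from the answer above, and the daily figures are converted to GB by the final foreach:

index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="Usage"
| eval st=if(len(st)=0 OR isnull(st),"(SQUASHED)",st)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, st, idx
| timechart span=1d sum(b) AS volumeGB by st fixedrange=false
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]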
Hey, I have a problem after upgrading to 9.1.5 from 9.0.4 (Enterprise). All the dashboards that use tokenlinks.js from the "simple_xml_examples" (Splunk Dashboard Examples) app, latest version, show the following error and the script doesn't work:

" A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details. "

In the dev tools (F12) I saw the error comes from common.js:

" Refused to execute script from '/en-US/static/@29befd543def.77/js/util/console.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
common.js:1702 Error: Script error for: util/console http://requirejs.org/docs/errors.html#scripterror
     at makeError (eval at e.exports (common.js:502:244924), <anonymous>:166:17)
     at HTMLScriptElement.onScriptError (eval at e.exports (common.js:502:244924), <anonymous>:1689:36) "

Does anyone have any idea why this happens or how to fix it? Thanks! Splunk Dashboard Examples Dashboard
Thank you. Do you have a general query to calculate the volume ingested for any sourcetype?
https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/inputConfig#Manually_refresh_dashboards_with_a_submit_button
https://community.splunk.com/t5/Getting-Data-In/Adding-a-field-and-changing-source-from-Source/m-p/147386 https://community.splunk.com/t5/Getting-Data-In/How-to-replace-meta-information/m-p/98452 Here are 2 links demonstrating different use cases for replacing source values with something suited to their particular use. Leveraging rex, you can replace your source with the value and match you require. The process is the same even if the rex is different.
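As a rough illustration of the search-time version of this (the index, sourcetype, and source_dir field name are just examples, and the regex assumes Windows-style backslash paths):

index=o365 sourcetype=DataDeletion
| eval source_dir=replace(source, "\\\\[^\\\\]+$", "") ``` strip the trailing \filename, leaving only the directory ```
| table _time source source_dir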
Thanks @dural_yyz for the answer. Where can I find the Submit button options you mentioned? I could not find any documentation for that...
Thank you for the reply. Do I have to create a webhook and run a script for it, or how can I do it?
That depends on what your SMS provider offers as API hooks. It's well documented that you can use a Splunk alert to interact with APIs, but the API needs to be provided by the far end, or be something you manually create yourself.
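A hedged sketch of wiring an alert to an HTTP endpoint with Splunk's built-in webhook alert action; the search, schedule, and SMS gateway URL below are placeholders, and whether the gateway accepts Splunk's webhook payload depends entirely on the provider:

savedsearches.conf
[SMS - failed logins]
# placeholder search and schedule; replace with your own
search = index=_audit action=failure | stats count
cron_schedule = */5 * * * *
enableSched = 1
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
action.webhook = 1
# hypothetical gateway URL supplied by the SMS provider
action.webhook.param.url = https://sms-gateway.example.com/hooks/splunk

If the provider expects a specific payload format, a custom alert action or a small relay script in front of the gateway may be needed instead of the plain webhook action.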
Found the issue. It was because I was not handling null values before the transforming command.
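For anyone hitting the same thing, a hedged sketch of the usual pattern: events where a by-field is null are silently dropped by transforming commands such as stats, so filling the field first keeps them. The index, sourcetype, and appcode field here are hypothetical examples, not the actual search from this thread:

index=main sourcetype=my_app
| fillnull value="unknown" appcode ``` keep events whose appcode is missing instead of losing them in stats ```
| stats count by appcode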
You could look at using the map command to pass the appcode from the first search to the other searches. However, I would hesitate to recommend this as it has performance and limits implications. At the end of the day, the sourcetypes may be different in the initial searches (which is why they would be in separate searches which are appended to one another), but by the end they are similar, i.e. an app code and a metric (or two). Having said that, if you wanted to go down this route, you should still look at optimising the combined searches (but they are quite complex for someone who doesn't know your data to figure out what you are ultimately aiming to achieve).
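For reference, a hedged sketch of what the map approach could look like; the index names, the appcode field, and the subsearch are placeholders, and maxsearches guards against the limits mentioned above (map runs one search per input row, which is where the performance cost comes from):

index=app_inventory sourcetype=appcodes
| stats count by appcode
| fields appcode
| map maxsearches=50 search="search index=app_metrics appcode=$appcode$ | stats avg(response_time) AS avg_response BY appcode"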
This is the right answer