
All Posts

Hi @fahimeh, are you sure that it's a Splunk issue and not a Windows issue? Anyway, open a case with Splunk Support. Ciao. Giuseppe
Hi @sverdhan, sorry, but your requirement isn't very clear: if a sourcetype didn't report in the last 30 days, how can you calculate its volume? It's always 0 for the last 30 days. Maybe you want the logs from the last 6 months, calculating their total volume and highlighting the sourcetypes that haven't sent logs for 30 days; in this case, you can apply a solution like the one you shared. Anyway, to calculate volume you have two solutions: a more performant (but less precise) one that uses an average size (e.g. 1 KB) per event, or a calculation of volume using the license consumption search. If the sourcetypes to monitor are in a lookup called perimeter.csv:

| tstats count latest(_time) AS lastTime where index=* [| inputlookup perimeter.csv | fields sourcetype ] earliest=-180d latest=now BY host
| eval period=if(lastTime>now()-86400*30,"Latest","Previous")
| stats sum(count) AS count dc(period) AS period_count values(period) AS period BY host
| eval status=case(period_count=2,"Always present",period="Latest","Only last Month",period="Previous","Only Previous")
| eval volume=count*1/1024/1024
| table host status volume

If instead you want a more detailed, but much less performant, solution, you could try:

index=_internal [| rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="Usage" [| tstats count latest(_time) AS lastTime where index=* [| inputlookup perimeter.csv | fields sourcetype ] earliest=-180d latest=now BY host | fields host ]
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) AS b BY _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB BY h fixedrange=false
| fields - _timediff
| foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

Ciao. Giuseppe
I am using an xyseries command after the filtering commands, but it is not giving any results.
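As a generic illustration (not the poster's actual search), xyseries needs exactly one row field, one column field, and one value field to survive the preceding commands; a minimal runnable sketch with illustrative field names:

| makeresults count=4
| streamstats count AS n
| eval host="host".(n%2), status=if(n<3,"ok","error"), count=n
| xyseries host status count

If the filtering commands leave any of those three fields null, or drop all events, xyseries returns nothing, which is a common cause of this symptom.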
Hi everyone, I'm using Splunk SOAR and trying to send HTML emails with detailed information via the SMTP app. I would like to include images in the email and create a well-formatted HTML message body. Could someone guide me on how to upload and embed images within the email? Thanks in advance!
Sorry for the delay. Exporting the scan results did provide additional information: as with most other apps, the problem is with "backups" of older versions of the app (".../default.old.20240828…i/views/attribution.xml"). So URA is triggering on "old" folders which are no longer active. The remaining question, then, is "to delete or not to delete?" I know I've participated in these discussions before. For "private" apps I could normally just ignore a specific search path for an app, but this is not possible for the Splunkbase app. So either I have to ignore the "failing" (false-positive) apps completely, or manually delete the "old" folders. What is the "best practice" here?
Hi @yuanliu, @ITWhisperer, @tscroggins, @PickleRick & @dural_yyz, Thanks everyone for your time; it works for me. Thanks again!
I have used the query below to get a list of 25 sourcetypes that have not reported for the last 30 days, but I need to know the volume of data ingested by them. Kindly suggest any ideas or alternative methods:

| metadata type=sourcetypes
| eval diff=now()-lastTime
| where diff > 3600*24*30
| convert ctime(lastTime)
| convert ctime(firstTime)
| convert ctime(recentTime)
| sort -diff
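One hedged sketch for the ingest volume, built on the standard license usage log (the 180-day window is an assumption and is subject to _internal retention; b and st are the bytes and sourcetype fields that license_usage.log records):

index=_internal source=*license_usage.log* type="Usage" earliest=-180d
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024,3)
| rename st AS sourcetype
| sort - GB

You could then filter this result against the sourcetypes returned by your metadata search.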
Update: after further testing, it does not work. I used

props.conf
-------------
[tcp] AND [host::server1]
TRANSFORMS-random=filterports

and it seems to work as expected: it applies the TRANSFORMS only for tcp data from server1. Though, I didn't seem to find any official documentation. It's good if you want to test on a single server before applying to the whole sourcetype.
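For reference, a minimal sketch of what the filterports transform referenced above could look like; the regex is purely illustrative, and the nullQueue routing follows the standard event-filtering pattern:

transforms.conf
-------------
[filterports]
# Hypothetical: drop events for port 8089 by routing them to nullQueue
REGEX = :8089\b
DEST_KEY = queue
FORMAT = nullQueue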
Hi @ITWhisperer, Thanks, it works in that place.
Yes, I've seen the auto-clear setting and activated it. Still, the alert is not triggered. I think that this kind of alert (or alert condition) is not suited for single-time events like "an error occurred in a trace", because there is no metric that goes up and down (like CPU usage). This can rather be implemented with log alerts (in Search & Reporting). Do you know a different way to create an alert for single events that occur in Splunk?
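For the log-alert route mentioned above, a minimal sketch (index, sourcetype, and error pattern are assumptions):

index=traces sourcetype=app_traces "error"
| stats count

Save it as an alert, schedule it (e.g. every 5 minutes over the last 5 minutes), and set the trigger condition to "Number of Results is greater than 0"; that fires on each occurrence window without needing a metric that rises and falls.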
Here are the first four; I am sure you can work out from this how to do the other 12 hex digits:

| eval ASbinary=if(idx < 1,
    replace(replace(replace(replace(reverse2hex,"0","0000"),"1","0001"),"2","0010"),"3","0011"),
    mvmap(reverse2hex, replace(replace(replace(replace(reverse2hex,"0","0000"),"1","0001"),"2","0010"),"3","0011")))
Thank you, that worked for me.
I tried this solution but am still facing the same issue.
The error message is generated only for these specific event codes.
Hello @ITWhisperer, Thanks for your response! Would you mind changing the code as well? Thanks a lot!
Hi, I'm looking for advice on how often I should upgrade the Splunk Universal Forwarder; what is the best practice for this? https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Admin/UpgradeyourForwarders says: "As a best practice, run the most recent forwarder version, even if the forwarder is a higher version number than your Splunk Cloud Platform environment." But is it really good practice to install the latest version? How do you do this in your environment?
I have the same issue, and even after enabling the deployment client via the CLI it still says it's disabled. What was the fix for your issue?
Thank you for the information; maybe I will check the latest sourcetype that Splunk generated by default yesterday, so I can validate the directory paths for inputs.conf.
Having converted the number to hex, perform 16 replacements, starting with 0, then 1, and so on, replacing each hex digit with its corresponding 4-bit binary equivalent.
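A minimal self-contained sketch of the full 16-replacement chain (field names hex and bin are illustrative); the 0-through-f order is safe because each replacement's output contains only 0s and 1s, which no later pattern matches:

| makeresults
| eval hex="3fa9"
| eval bin=replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(lower(hex),"0","0000"),"1","0001"),"2","0010"),"3","0011"),"4","0100"),"5","0101"),"6","0110"),"7","0111"),"8","1000"),"9","1001"),"a","1010"),"b","1011"),"c","1100"),"d","1101"),"e","1110"),"f","1111")

For hex="3fa9" this yields bin="0011111110101001".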
Yes, but in /var/log there are many different kinds of files (and typically even many different kinds of events within some files), and each of them should be parsed differently. If you just ingest all of them into one big "sack", you will most definitely lose at least some info (like properly parsed timestamps on some events) and you will not have properly parsed fields for many of those events. So if you have, for example, /var/log/exim/main.log, you should ingest it separately with the exim_main sourcetype (and reject.log should have its own input stanza with the exim_reject sourcetype), as sketched below. Apache httpd access logs should be ingested separately with one of the access_* sourcetypes, depending on your Apache configuration. And so on. If you just pull everything in with one generic sourcetype... well, you can do a full-text search but not much more. You're losing a lot of functionality.
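A minimal inputs.conf sketch of the per-file approach described above (the index names are assumptions):

inputs.conf
-------------
[monitor:///var/log/exim/main.log]
sourcetype = exim_main
index = mail

[monitor:///var/log/exim/reject.log]
sourcetype = exim_reject
index = mail

[monitor:///var/log/httpd/access_log]
sourcetype = access_combined
index = web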