All Posts



@Santosh2 The cron expression you mentioned is wrong: 0 0-21, 7-23 5-9 3 1-7

Reference, the five cron fields:

* * * * *

Each * denotes one field: 1. Minute 2. Hour 3. Day of month 4. Month 5. Day of week

Example: if you want to run a report every hour from 8 AM to 8 PM, the answer is:

00 08-20 * * *
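The field semantics above can be illustrated with a minimal Python sketch. This is not Splunk's or cron's actual scheduler, just an illustration of how a single field such as "08-20", "*", or a comma list is matched against a value (the function name is made up for this example; step values like */5 are not handled):

```python
def cron_field_matches(field: str, value: int) -> bool:
    """Return True if `value` matches one cron field (e.g. "*", "08-20", "0,30")."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            if int(lo) <= value <= int(hi):
                return True
        elif int(part) == value:
            return True
    return False

# "00 08-20 * * *" fires at minute 0 of every hour from 8 AM through 8 PM:
assert cron_field_matches("08-20", 8)
assert cron_field_matches("08-20", 20)
assert not cron_field_matches("08-20", 21)
```

Running all five fields through a check like this against the current minute, hour, day, month, and weekday is essentially how a schedule decides whether to fire.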
Hi, we are using the ITSI Service Map/Service Analyzer to monitor services. We have a use case where, for the same service, we need to add multiple KPIs, and those KPIs depend on different entities. For example: we have an infrastructure-related KPI which uses host as the entity; another KPI is "Service Up", which basically checks that the service is up, and in this case the entity is "process name". We also have a KPI for garbage collection, which again has a different entity. Question: I am trying to understand the best way to handle such a scenario, where we can add all these KPIs without making the service map too complex.
@Santosh2 Modify the remaining fields (month, day of week, etc.) as per your requirement.
Hello, do you have experience with Splunk on Rocky Linux since then? We need to migrate off CentOS 7 soon, and one of the candidates is Rocky 9. But the system requirements page https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/Systemrequirements no longer lists its kernel version (5.14) (same for RHEL). I believe it will work, but since I need to migrate a physical production server, I want to reduce the risk as much as I can...
I did that; it's saying it won't run from 9 PM to 7 AM, but it's not working.
@Santosh2 As per your query, "my alert should not trigger between 9pm to 7am I used below corn job but I am receiving alerts after 9pm". So that means the cron schedule should cover 7 AM to 9 PM, correct? In that case, you can try this:

* 7-21 * * *

Hour field: 7-21 (every hour from 7 AM to 9 PM)
Assuming you mean cron, not corn, try checking your expression with something like Crontab.guru, the cron schedule expression generator.
Dear Splunk user, using this sample data

[{"Field 859": "Value aaaaa", "Field 2": "Value bbbbb"}, {"Field 1": "Value ccccc", "Field 2": "Value ddddd"}, {"Field 1": "Value eeeee", "Field 2": "Value fffff"}]
[{"Field 759": "Value ggggg", "Field 2": "Value hhhhh"}, {"Field 1": "Value iiiii", "Field 2": "Value jjjjj"}, {"Field 1": "Value kkkkk", "Field 2": "Value lllll"}]

with this props.conf

[trbndrw_temp]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = (?:\}(\s*,\s*)\{)|(\][\r\n]+\[)
TRANSFORMS-getrid = getridht

and this transforms.conf

[getridht]
INGEST_EVAL = _raw=replace(_raw, "(\[|\])","")

you may be able to achieve what you want.

Happy splunking
Luca (aka "one DASH is always better")
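The effect of that LINE_BREAKER plus the INGEST_EVAL replace can be approximated outside Splunk with a small Python sketch. This is not Splunk's ingest pipeline (LINE_BREAKER has its own capture-group semantics); it only shows the intended outcome on a simplified sample: break the JSON array between objects, then strip the square brackets so each event is a clean JSON object.

```python
import re
import json

raw = '[{"Field 1": "Value a", "Field 2": "Value b"}, {"Field 1": "Value c", "Field 2": "Value d"}]'

# Step 1, roughly what LINE_BREAKER does here: break between "}, {"
# while keeping a brace on each side of the break.
broken = re.sub(r'\}\s*,\s*\{', '}\n{', raw).splitlines()

# Step 2, what the INGEST_EVAL replace does: strip [ and ] from each event.
cleaned = [re.sub(r'[\[\]]', '', event) for event in broken]

# Each resulting event now parses as a standalone JSON object.
for event in cleaned:
    json.loads(event)
```

With the real config, Splunk performs the equivalent steps at index time, so each object in the incoming arrays lands as its own event.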
I want to pass a dynamic value from my search result into the email alert subject. I tried $result.fieldname$ but it is not coming up in the email alert. Can someone help me? Thanks
WARNING! This answer is wrong. The date on this file will be the date it had when it was packaged into the installer (tgz/rpm).
Hi all, I set a corn job on an alert. My alert should not trigger between 9pm and 7am. I used the below corn job but I am receiving alerts after 9pm: 0 0-21, 7-23 5-9 3 1-7 Is this corn job correct? Do I need to make any changes?
@burwell wrote: How about something like this

| makeresults
| eval start=strptime("02-01-2024", "%m-%d-%Y")
| eval today=now()
| eval time_difference=floor((today-start)/(60*60*24))
| eval mod_val=time_difference % 28
| eval days_to_patch=28-mod_val

Thank you, I think this does exactly what I need! Greatly appreciated!
Hello everyone, I followed the steps to install DSDL (https://docs.splunk.com/Documentation/DSDL/5.1.1/User/InstallDSDL) and this scenario: https://www.sidechannel.blog/en/detecting-anomalies-using-machine-learning-on-splunk/ But when I try to start the container, I get a 403 error (screenshots omitted). I checked roles and capabilities, I checked all kinds of posts from the community, and I checked the global permissions on DSDL. Is there a known issue about this? Have a good day all, Betty
Thanks for the reply. I was able to resolve the issue today. The issue was not load. As mentioned, there are only a handful of files that actually match the criteria, and they see infrequent updates - under 10 new entries per minute, I would estimate. The issue ended up being the stanza itself being too vague. I'm not sure how Splunk monitors/parses these internally, but I believe there is room for improvement. Once I split the stanzas into a couple that cover the majority of our use cases, the memory dropped back down to around 200 MB.

The new stanzas, for anyone else finding this in the future, eliminated the `...` wildcard and replaced it with a single-level one by including a couple of stanzas for the common places these logs would be found:

[monitor:///var/www/*/storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

[monitor:///var/www/*/shared/storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

Thank you kindly for taking the time to answer; there are some pieces of advice you mentioned that I wasn't as familiar with, and it was good to learn more.
Hi, I'd like to create a detailed report with info including the type of forwarder, the average KB/s, the OS, the IP, and the Splunk version, but also information about which exact index each forwarder forwards to. Is it possible to recreate the search from the monitoring console for forwarder instances and use it somehow to connect it to each index?

`dmc_get_forwarder_tcpin` hostname=*
| eval source_uri = hostname.":".sourcePort
| eval dest_uri = host.":".destPort
| eval connection = source_uri."->".dest_uri
| stats values(fwdType) as fwdType, values(sourceIp) as sourceIp, latest(version) as version, values(os) as os, values(arch) as arch, dc(dest_uri) as dest_count, dc(connection) as connection_count, avg(tcp_KBps) as avg_tcp_kbps, avg(tcp_eps) as avg_tcp_eps by hostname, guid
| eval avg_tcp_kbps = round(avg_tcp_kbps, 2)
| eval avg_tcp_eps = round(avg_tcp_eps, 2)
| `dmc_rename_forwarder_type(fwdType)`
| rename hostname as Instance, fwdType as "Forwarder Type", sourceIp as IP, version as "Splunk Version", os as OS, arch as Architecture, guid as GUID, dest_count as "Receiver Count", connection_count as "Connection Count", avg_tcp_kbps as "Average KB/s", avg_tcp_eps as "Average Events/s"

And probably somehow join it with

| tstats count values(host) AS host WHERE index=* BY index

The issue I see is that it searches dmc_get_forwarder_tcpin, which is equivalent to index=_internal sourcetype=splunkd group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=*, and I cannot find the indexes there. How can I connect it to each index?
I finally found the way. To obtain the ID, you must launch the "run query" action first. In the action fields, set the email address in the email field and the clean Message ID in the query field. Do not select any other option, nor fill in any other field. In the response, you should see another ID in a base64-like format. This is the ID used to operate on emails. Keep in mind that this ID changes every time you perform any action on the email (moving it to a different folder, for instance). Hope this helps.
Do you have a specific example? I'm looking through the _configtracker index and not seeing any relevant info for savedsearches.conf changes.
Further to @gcusello's response, the chart command has only three dimensions: the over field, the by field, and the aggregation function. Strictly speaking, this can be extended to multiple aggregation functions, but you end up with composite column names. As already suggested, concatenating fields is one way to get around this. Another way is perhaps more tricky: use the stats command instead, then rework the fields to get the by-field values represented as field/column names.
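The two workarounds mentioned above can be sketched in Python with plain dictionaries (the event data and field names here are made up for illustration; this is not SPL, just the shape of the data each approach produces):

```python
from collections import defaultdict

# Hypothetical events with two "by" fields plus a value to aggregate.
events = [
    {"host": "web1", "status": "200", "count": 5},
    {"host": "web1", "status": "500", "count": 1},
    {"host": "web2", "status": "200", "count": 7},
]

# Workaround 1: concatenate the two by-fields into a single key,
# like `eval key=host.":".status | chart sum(count) over key`.
concat = defaultdict(int)
for e in events:
    concat[f'{e["host"]}:{e["status"]}'] += e["count"]

# Workaround 2: pivot one by-field's values into column names,
# the table shape `chart sum(count) over host by status` produces.
pivot = defaultdict(dict)
for e in events:
    pivot[e["host"]][e["status"]] = e["count"]
```

The first approach keeps one flat key column; the second turns each distinct status value into its own column per host, which is what makes a third "by" dimension impossible without concatenation.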
Okay, that is what I thought. Thank you so much for confirming!
Hi @sumarri, to my knowledge, the only way is to concatenate the fields into one field and use that for the chart command. Ciao. Giuseppe