All Posts



Hi, I have the following issue: I have many events with different document_number + datetime_type combinations, each with a field (started_on). There are always 4 different types per document_number. Then 4 new timestamp fields are evaluated from the type and the timestamp, so each event ends up with 1 new filled timestamp in a different field. Now I need to fill the empty ones from the evaluated ones for the same document_number. With streamstats I was able to fill them forwards (after the value is found), but not backwards. Is it possible somehow? Or only if I do | reverse and apply streamstats again?
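Yes, the reverse-and-streamstats-again approach works. A minimal sketch, assuming ts_a .. ts_d are placeholders for your four evaluated timestamp fields (stats functions like last() skip null values, so streamstats last() by group acts as a fill-down):

````
... your base search, sorted by _time ...
``` fill forwards within each document_number ```
| streamstats last(ts_a) as ts_a, last(ts_b) as ts_b, last(ts_c) as ts_c, last(ts_d) as ts_d by document_number
``` flip the event order, fill again (now effectively backwards), flip back ```
| reverse
| streamstats last(ts_a) as ts_a, last(ts_b) as ts_b, last(ts_c) as ts_c, last(ts_d) as ts_d by document_number
| reverse
````

The second pass only touches fields still empty after the forward pass, since last() returns the most recent non-null value seen so far.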
Hi @ITWhisperer     I'm just passing the token in link $office_filter$ <link target="_blank">/app/SAsh/details?form.compliance_filter=$click.value$&amp;form.timerange=$timerange$&amp;form.antivirus_filter=*&amp;$office_filter$&amp;form.machine=$machine$&amp;form.origin=$origin$&amp;form.scope=$scope$</link>
This is not working. The second search has one field, StatusDescription, which I want to add to the first search using the common fields Name and host.

1st search: ```Table on Dashboard = M3_PROD_splunk__agent__universal_forwarder_status_is_down```

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections os=Windows
| dedup hostname
| eval age=(now()-_time)
| eval LastActiveTime=strftime(_time,"%y/%m/%d %H:%M:%S")
| eval Status=if(age<3600,"Running","DOWN")
| rename age AS Age
| eval Age=tostring(Age,"duration")
| lookup 0010_Solarwinds_Nodes_Export Caption as hostname OUTPUT Application_Primary_Support_Group AS CMDB2_Application_Primary_Support_Group, Application_Primary AS CMDB2_Application_Primary, Support_Group AS CMDB2_Support_Group, NodeID AS SW2_NodeID, Enriched_SW AS Enriched_SW2, Environment AS CMDB2_Environment
| eval Assign_To_Support_Group=if(Assign_To_Support_Group_Tag="CMDB_Support_Group", CMDB2_Support_Group, CMDB2_Application_Primary_Support_Group)
| where Status="DOWN" AND NOT isnull(SW2_NodeID) AND (CMDB2_Environment="Production" OR CMDB2_Environment="PRODUCTION")
```| table _time, hostname, sourceIp, Status, LastActiveTime, Age, SW2_NodeID, Assign_To_Support_Group, CMDB2_Support_Group, CMDB2_Environment```
| table _time, hostname, sourceIp, Status, LastActiveTime, Age, Assign_To_Support_Group, CMDB2_Environment

2nd search:

index=index_name sourcetype="nodes"
| lookup lookupfile1 Name OUTPUTNEW
| dedup Caption
| table Caption StatusDescription UnManaged UnManageFrom UnManageUntil
| search UnManaged=true
| eval UnManageUntil = strftime(strptime(UnManageUntil, "%Y-%m-%dT%H:%M:%S.%QZ"), "%Y-%m-%d %H:%M:%S")
| eval UnManageFrom = strftime(strptime(UnManageFrom, "%Y-%m-%dT%H:%M:%S.%QZ"), "%Y-%m-%d %H:%M:%S")
| eval UnManageUntil = coalesce(UnManageUntil, "NOT SET") ```replaces any null values in the "UnManageUntil" field with NOT SET```
| sort -UnManageFrom ```sorts the events in descending order based on the "UnManageFrom" field```
Try using sed.  | rex mode=sed "s/rawjson=\\\"//"
Hi @isoutamo, The current count is under 150. Thank you  
How are you using the token in the link?
How many (approximately) index/sourcetype pairs do you have? Only those few, or e.g. tens/hundreds?
Perhaps it is this line? | eval hour=strftime(_time, "%H") The _time value here will be the time for the start of the day when the summary index was updated, i.e. the hour will always be 00.
I have already shared before, the events are in HTML.   Disposition: inline Subject: INFO - Services are in Maintenance Mode over 2 hours -- AtWork-CIW-E1 Content-Type: text/html <font size=3 color=black>Hi Team,</br></br>Please find below servers which are in maintenance mode for more than 2 hours; </br></br></font> <table border=2> <TR bgcolor=#D6EAF8><TH colspan=2>Cluster Name: AtWork-CIW-E1</TH></TR> <TR bgcolor=#D6EAF8><TH colspan=1>Service</TH><TH colspan=1>Maintenance Start Time in MST</TH></TR><TR bgcolor=#FFB6C1><TH colspan=1>oozie</TH><TH colspan=1>Mon Oct 16 07:29:46 MST 2023</TH></TR> </table> <font size=3 color=black></br>  Please check the parts in bold characters. I want this in table format.
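If the HTML is as regular as this sample, you could pull the fields out with rex. A rough sketch against the sample above (index/sourcetype and field names are placeholders, and the regexes are tied to this exact markup, so test them on more events first):

````
index=your_index sourcetype=your_sourcetype
| rex "Cluster Name: (?<cluster>[^<]+)</TH>"
``` each highlighted (bold) row holds one service and its maintenance start time ```
| rex max_match=0 "<TR bgcolor=#FFB6C1><TH colspan=1>(?<service>[^<]+)</TH><TH colspan=1>(?<start_time>[^<]+)</TH></TR>"
``` pair up the multivalue matches, then expand to one row per service ```
| eval tmp=mvzip(service, start_time, "|")
| mvexpand tmp
| eval service=mvindex(split(tmp,"|"),0), start_time=mvindex(split(tmp,"|"),1)
| table cluster service start_time
````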
Hi @bmanikya, could you share more sample logs? Because, as you can see on regex101.com, my regex works on the shared sample. Ciao. Giuseppe
Hi @smanojkumar, please try this regex (10\d)|201|205|(30[1-3]) that you can test at https://regex101.com/r/ujeYQM/1 Ciao. Giuseppe
Hi there!    In inputs.conf whitelist, how do I create a regex expression for whitelisting files which contain a certain number (101-109, 201, 205, 301-303)?
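For reference, whitelist in inputs.conf is an unanchored regex matched against the full file path, so a stanza like the sketch below should work (the monitor path is just a placeholder). Note that 10\d would also match 100 and 110; 10[1-9] is tighter for 101-109:

```
[monitor:///var/log/myapp/]
whitelist = (10[1-9]|201|205|30[1-3])
```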
Hi @ITWhisperer,    It is fine, but the prefix "form.office_filter=" is affecting the token that is used in the search, and the link is not as expected. Here is the link: &form.office_filter%3DBack%20Office%26form.office_filter%3DFront%20Office=& If I use this instead, it works: &form.office_filter=Back%20Office&form.office_filter=Front%20Office=& = is replaced by %3D in the first link, and & is replaced by %26. Can you please help me with this? Thanks!
Hi Team,  I'm using a summary index for the below requirement:

1. Store daily counts of HTTP_Status_Code per hour for each application (app_name) in a daily summary index.
2. Once a week, calculate the average for each app_name by hour and HTTP_STATUS_CODE over the values stored in the daily summary index.
3. These average values will be shown in a dashboard widget.

But when I try to calculate the avg over the stored values, it isn't working. Below are the steps I'm following:

1. Pushing HTTP_Status_Code, _time, hour, day, app_name, count along with value="Summary_Test" (for ease of filtering) to the daily index named "summary_index_1d". Note: app_name is an extracted field. There are 25+ different values.

index="index" | fields HTTP_STATUS_CODE,app_name | eval HTTP_STATUS_CODE=case(like(HTTP_STATUS_CODE, "2__"),"2xx",like(HTTP_STATUS_CODE, "4__"),"4xx",like(HTTP_STATUS_CODE, "5__"),"5xx") | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%A") | bin _time span=1d | stats count by HTTP_STATUS_CODE,_time,hour,day,app_name | eval value="Summary_Test" | collect index=summary_index_1d

2. Retrieve data from the summary index; the pushed data shows up.

index=summary_index_1d "value=Summary_Test"

3. Now I want to calculate the average over the previous 2 or 4 weekdays of data stored in the summary index, i.e. perform avg on the stored values. I'm using this as a reference: https://community.splunk.com/t5/Splunk-Enterprise/How-to-Build-Average-of-Last-4-Monday-Current-day-vs-Today-in-a/m-p/657868/highlight/true#M17385
But this fails:

index=summary_index_1d "value=Summary_Test" app_name=abc HTTP_STATUS_CODE=2xx | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%d") | eval dayOfWeek = strftime(_time, "%u") | where dayOfWeek >= 1 AND dayOfWeek <= 5 | stats count as value by hour log_day day | sort log_day, hour | stats avg(value) as average by log_day, hour

I guess the "hour" in the query is creating a conflict. I tried without it and also with changed values, but it does not return the expected result. When the same query is used on the main index, it works perfectly fine for my requirement, but when used on the summary index, it is not able to calculate the average.

This works fine for the requirement, but when the same is applied to the summary index, it fails:

index=index app_name=abc | eval HTTP_STATUS_CODE=case(like(status, "2__"),"2xx") | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%d") | eval dayOfWeek = strftime(_time, "%u") | where dayOfWeek >= 1 AND dayOfWeek <= 5 | stats count as value by hour log_day day | sort log_day, hour | stats avg(value) as average by log_day, hour

Can you please help me understand what's wrong with the query used on the summary index? @ITWhisperer @yuanliu @smurf
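One likely culprit: the collect search bins _time with span=1d, so strftime(_time, "%H") on the summary events is always 00; the real hour only survives in the stored hour field. Each summary event also already carries one hourly count, so the counts should be averaged directly rather than counting summary rows. A sketch along those lines, reusing the stored hour and day fields:

````
index=summary_index_1d value="Summary_Test" app_name=abc HTTP_STATUS_CODE=2xx
``` the weekday filter still works, since _time keeps the day even after binning ```
| eval dayOfWeek = strftime(_time, "%u")
| where dayOfWeek >= 1 AND dayOfWeek <= 5
``` average the stored per-hour counts across weeks, using the stored hour/day fields ```
| stats avg(count) as average by day, hour
````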
Hello @isoutamo, The data in the lookup file is in the form below; I need to read each row and compute the results:

index     sourcetype
-----------------------
idx1      s1
idx2      s2
idx3      s3
idx1      s4

Now I need to compute and display results for each row by running the predict command on each of them. This is the base query I have built for running the predict command, which fetches the forecast value for one row:

index=custom_index orig_index=idx1 orig_sourcetype=s1 earliest=-4w@w latest=-2d@d | timechart span=1d avg(event_count) AS avg_event_count | predict avg_event_count | tail 1 | fields prediction(avg_event_count)

Please share if you need any more details from my end. I look forward to your inputs on solving the problem. Thank you
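One way to drive a search per lookup row is the map command, which substitutes $field$ tokens from each input row into a templated search. A sketch, assuming the lookup is named index_sourcetype.csv with columns index and sourcetype (adjust the name and maxsearches to your row count):

````
| inputlookup index_sourcetype.csv
``` run the predict pipeline once per row, substituting index and sourcetype ```
| map maxsearches=100 search="search index=custom_index orig_index=$index$ orig_sourcetype=$sourcetype$ earliest=-4w@w latest=-2d@d
    | timechart span=1d avg(event_count) AS avg_event_count
    | predict avg_event_count
    | tail 1
    ``` tag each forecast with the row it came from ```
    | eval orig_index=\"$index$\", orig_sourcetype=\"$sourcetype$\"
    | fields orig_index orig_sourcetype prediction(avg_event_count)"
````

Note that map runs one subsearch per row, so it can be slow with many rows; keep maxsearches at or above the row count or rows will be silently dropped.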
Task failed: Store encrypted MySQL credentials on disk on host: ESS-LT-68Q9FS3 as user: appd-team-9 with message: Command failed with exit code 1 and stdout Checking if db credential is valid... Mysql returned with error code [127]: /home/appd-team-9/appdynamics/platform/product/controller/db/bin/mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory and stderr .
No
As it solves your problem, please accept it as a Solution so others can find it later. Happy Splunking!
Excellent, I see it now. Works perfectly. Thanks!
If you have extracted that whole value into some field (e.g. ldap_query) then use it. If that value is still in _raw then you could leave that field=xxxx part out. Just see https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex