All Posts

Hi, you could probably try Splunk Workload Management for this. At least it works when users try to run queries without index=xyz. See more: https://docs.splunk.com/Documentation/Splunk/latest/Workloads/WorkloadRules
r. Ismo
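For reference, a hedged sketch of what such a rule could look like in workload_rules.conf (the stanza name and predicate below are illustrative placeholders, not a tested configuration; check the WorkloadRules documentation above for the exact predicate syntax your version supports):

    # workload_rules.conf -- illustrative sketch only
    [block_searches_without_index]
    # Match ad hoc searches that do not name an index (placeholder predicate)
    predicate = search_type=adhoc AND NOT index=*
    # Abort matching searches instead of moving them to another pool
    action = abort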
Hello, is it possible to install SA-cim_vladiator on clustered search heads? Thanks.  
Is there an efficient way to block queries that do not specify a sourcetype? Educating users is not working, and we want to block such queries so that the environment does not degrade.
I am attempting to identify when Splunk users are running searches against historic data (over 180 days old). Additionally, as part of the same request, I am looking to identify where users have restored data from DDAA to DDAS in order to run searches against it. This is to build a greater understanding of how often historic data is accessed, to help guide data retention requirements in Splunk Cloud (i.e., is retention set appropriately, or can we extend/reduce retention periods based on the frequency of data access?).
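For the first part, one possible starting point is the audit index, which records the time range of each search; a hedged sketch (assuming search_et is populated, which it is not for all-time searches):

    index=_audit action=search info=completed search_et!="N/A"
    | eval search_age_days=round((now()-tonumber(search_et))/86400,0)
    | where search_age_days>180
    | table _time user search search_et search_lt search_age_days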
Hi Manish, in the Dockerfile for the API PSA we just added:

RUN npm install httpntlm

Regards, Roberto
Hi @bmanikya, first of all, Splunk isn't a database, so avoid using join as is usual for all of us coming from databases! There are other, more efficient methods to correlate events from two searches. Anyway, in your search there's a thing that I don't understand: in the second search you have:

| table _time, App2
| search App2=App1

but after the table command you have only those two fields, so where does the App1 field come from? Anyway, try to redesign your searches using the stats command with the join field as the correlation key, something like this:

(index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId") OR (index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request")
| rex "^(?:[^ \n]* ){4}(?P<App1>.+)"
| rex "^(?:[^=\n]*=){6}\w+_\d+_(?P<App2>.+)"
| eval Time1=if(searchmatch("Allocated new applicationId"),strftime(_time,"%Y-%m-%d %H:%M"),""), Time2=if(searchmatch("OPERATION=Submit Application Request"),strftime(_time,"%Y-%m-%d %H:%M"),""), app=coalesce(App1,App2)
| stats values(Time1) AS Time1 values(Time2) AS Time2 BY app
| table app, Time1, Time2

Ciao. Giuseppe
Ideally, you should rewrite your search to avoid using joins, as they are slow. If you want to continue with joins, your subsearch should have the same field name as the joining field. The subsearch executes before the main search, so in your example App1 is not known in the subsearch (in fact, none of the fields from the main search are available to the subsearch in the join). Try something like this:

index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId"
| rex field=_raw "^(?:[^ \n]* ){4}(?P<App1>.+)"
| eval _time=strftime(_time,"%Y-%m-%d %H:%M")
| table _time, App1
| rename _time as Time1
| join type=inner App1
    [ search index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request"
    | rex field=_raw "^(?:[^=\n]*=){6}\w+_\d+_(?P<App1>.+)"
    | eval _time=strftime(_time,"%Y-%m-%d %H:%M")
    | rename _time as Time2
    | table Time2, App1]
| table Time1, App1, Time2
Hi Team, looking for help on configuring the statuspage.io add-on to ingest incidents and collect all scheduled maintenance from statuspage.io.
I would like to join search query 1 and search query 2 (combined below) and get the results, but no results are found.

index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId"
| rex field=_raw "^(?:[^ \n]* ){4}(?P<App1>.+)"
| eval _time=strftime(_time,"%Y-%m-%d %H:%M")
| table _time, App1
| rename _time as Time1
| join type=inner App1
    [ search index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request"
    | rex field=_raw "^(?:[^=\n]*=){6}\w+_\d+_(?P<App2>.+)"
    | eval _time=strftime(_time,"%Y-%m-%d %H:%M")
    | table _time, App2
    | search App2=App1
    | rename _time as Time2]
| table Time1, App1, Time2, App2
Thanks for the info shared; I was able to fetch the results. I have another requirement: I want to show a bar chart of the total login count based on the time period we submit. For example, if we select 2 days, it should show a bar chart where the y-axis is the login count and the x-axis is the time selection, on a per-day interval (6th Feb, 7th Feb, and so on).
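For reference, a hedged sketch of what that chart search might look like (index, sourcetype, and the login filter are placeholders for your actual data; the dashboard time picker supplies the selected range, and span=1d produces one bar per day):

    index=your_index sourcetype=your_sourcetype action=login
    | timechart span=1d count as "Login Count"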
Hi @danielbb, "File" and "URL" correspond to the artifact creation flow on the investigation workbench. Instead of creating a file or URL artifact on the workbench by hand, you can specify which fields should be used to create artifacts automatically when you add a notable to the investigation workbench. More details here: https://docs.splunk.com/Documentation/ES/latest/Admin/Customizeinvestigations#Set_up_artifact_extraction_for_investigation_of_notable_events
Good morning, let me tell you about my situation. We have a forwarder inside a Docker container (python:3.11-slim-bullseye). We've noticed that when we deploy an application from the deployment server to the forwarder by adding a stanza to the inputs.conf file, the forwarder's ExecProcessor doesn't detect the change. Could you please help me understand why? Thank you very much, regards.
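One thing worth checking, as a hedged sketch: scripted inputs are typically only picked up when splunkd restarts or reloads, so the deployment app may need to be configured to restart the forwarder after deployment. In serverclass.conf on the deployment server (the server class and app names below are placeholders):

    # serverclass.conf -- illustrative stanza; substitute your own class and app names
    [serverClass:my_forwarders:app:my_script_app]
    restartSplunkd = true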
Hi All, how can we modify the below search so that it shows only the enabled correlation searches that did not trigger a notable in the past X days?

| rest /services/saved/searches
| search title="*Rule" action.notable=1
| fields title
| eval has_triggered_notables = "false"
| join type=outer title
    [ search index=notable search_name="*Rule" orig_action_name=notable
    | stats count by search_name
    | fields - count
    | rename search_name as title
    | eval has_triggered_notables = "true" ]

Thanks.
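For reference, a hedged sketch of one possible modification (disabled=0 filters on the disabled field returned by the REST endpoint, the final where keeps only searches that never appeared in the notable index, and earliest=-7d is a placeholder for your X days):

    | rest /services/saved/searches
    | search title="*Rule" action.notable=1 disabled=0
    | fields title
    | eval has_triggered_notables = "false"
    | join type=outer title
        [ search index=notable search_name="*Rule" orig_action_name=notable earliest=-7d
        | stats count by search_name
        | fields - count
        | rename search_name as title
        | eval has_triggered_notables = "true" ]
    | where has_triggered_notables=="false"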
Good morning, let me tell you about my case. In my company we have five indexers, one for development and the other four for production. We have an inputs.conf on a forwarder inside a Docker container (python:3.11-slim-bullseye) with three stanzas that execute a script with arguments. One stanza sends the data to development and runs every two minutes; the other two send the data to production, one running every minute and the other every two minutes. We noticed that during a period of time last night we did not receive any data from the forwarder. The development stanza is accounted for, as the machine was being patched and Splunk was stopped during exactly that period. We observed that during those hours the forwarder did not execute any scripts. In that time frame we found these traces in the forwarder's watchdog.log:

02-05-2024 20:02:18.220 +0000 ERROR Watchdog - No response received from IMonitoredThread=0x7fabb87fec60 within 8000 ms. Looks like thread name='ExecProcessor' tid=1937852 is busy !? Starting to trace with 8000 ms interval.

Could you please help me understand why the forwarder did not execute any scripts during that time frame? Thank you very much. Best regards.
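To see when ExecProcessor went quiet relative to the Watchdog errors, a hedged sketch (the host value is a placeholder for the forwarder, and it assumes the forwarder's internal logs are being indexed):

    index=_internal host=my_forwarder source=*splunkd.log* (component=ExecProcessor OR component=Watchdog)
    | timechart span=5m count by component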
Hi @gitingua, in classic dashboards it isn't possible; you have to use Dashboard Studio for this. Ciao. Giuseppe
Hi @anissabnk, don't use date_month_year, but date_year_month; then add the month number:

| eval month=strftime(_time,"%m")
| eval date_year_month=date_year." - ".month." - ".date_month

In this way, you can sort the results by date_year_month. Then, at the end of the search, you can remove the month number like this:

| eval date_year_month=substr(date_year_month,1,7).substr(date_year_month,12,15)

Eventually, check whether the substring offsets are correct! Ciao.
Colleagues, hi all! Can you give me some advice on editing dashboards? I have 4 static tables, and I need to arrange them so that the first three are on the left, stacked in order, and the fourth is stretched on the right so that it is large and long with no empty space. I tried to play around with the XML, but to no avail. The XML itself is below. Sorry if this can't be done at all! Thanks to all!

<form>
  <label>Testimg</label>
  <row>
    <panel depends="$alwaysHideCSS$">
      <title>Width settings</title>
      <html>
        <style>
          #test_1{ width:50% !important; }
          #test_2{ width:50% !important; }
          #test_3{ width:50% !important; }
          #test_4{ width:50% !important; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="test_1">
      <title>Table 1</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=5 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel id="test_2">
      <title>Table 2</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=6 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel id="test_3">
      <title>Table 4</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=20 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel id="test_4">
      <title>Table 3</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=7 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
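For reference, a hedged, untested sketch of one way to get that layout in Simple XML: a single row with two panels, where multiple <table> elements inside one panel stack vertically, so the three small tables share the left panel while the fourth fills the right (the panel ids are new placeholder names; the searches are taken from the XML above):

<row>
  <panel id="left_stack">
    <table>
      <search>
        <query>| makeresults count=10 | eval no=5 | table no</query>
      </search>
    </table>
    <table>
      <search>
        <query>| makeresults count=10 | eval no=6 | table no</query>
      </search>
    </table>
    <table>
      <search>
        <query>| makeresults count=10 | eval no=7 | table no</query>
      </search>
    </table>
  </panel>
  <panel id="right_tall">
    <table>
      <search>
        <query>| makeresults count=10 | eval no=20 | table no</query>
      </search>
      <option name="count">30</option>
    </table>
  </panel>
</row>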
Ok for me, but how do I get the results in chronological order? For now, I have a result, but it is not yet in chronological order. I have modified my SPL request like this:

| union maxtime=500 timeout=500
    [ search index="idx_arv_ach_appels_traites" AND (activite_cuid="N1 FULL" OR activite="FULL AC") AND (NOT activite_cuid!="N1 FULL" AND NOT activite!="FULL AC")
    | eval date_month_year=date_month." ".date_year
    | stats sum(appels_traites) as "Nbre appels" by date_month_year]
    [ search index="idx_arv_ach_tracage" (equipe_travail_libelle="*MAROC N1 AC FULL")
    | eval date=strftime(_time,"%Y-%m-%d")
    | eval date_month_year=date_month." ".date_year
    | dedup date,code_alliance_conseiller_responsable,num_client
    | chart count(theme_libelle) as "Nbre_de_tracagesD" by date_month_year]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*") AND (code_resolution="474"OR"836"OR"2836"OR"2893"OR"3085"OR"3137"OR"3244"OR"4340"OR"4365"OR"4784"OR"5893"OR"5896"OR"5897"OR"5901"OR"5909"OR"5914"OR"6744"OR"7150"OR"8020"OR"8531"OR"8534"OR"8535"OR"8548"OR"8549"OR"8709"OR"8876"OR"8917"OR"8919"OR"8946"OR"8961"OR"8962"OR"8970"OR"8974"OR"8998"OR"8999"OR"9000"OR"9001"OR"9004"OR"9006"OR"9007"OR"9010"OR"9011"OR"9012"OR"9048"OR"9052"OR"9058"OR"9059"OR"9069"OR"9088"OR"9089"OR"9090"OR"9095"OR"9107"OR"9108"OR"9116"OR"9148"OR"9150"OR"9169"OR"9184"OR"9190"OR"9207"OR"9208"OR"9209"OR"9211"OR"9214"OR"9223"OR"9239"OR"9240"OR"9241"OR"9248"OR"9251"OR"9274"OR"92752"OR"9276"OR"9288"OR"9299"OR"9300"OR"9302"OR"9323"OR"9324"OR"9366"OR"9382"OR"9385"OR"9447"OR"9450"OR"9455"OR"9466"OR"9467"OR"9476"OR"9516"OR"9559"OR"9584"OR"9603"OR"9627"OR"9633"OR"9640"OR"9654"OR"9670"OR"9710"OR"9735"OR"9740"OR"9782"OR"9784"OR"9785"OR"9786"OR"9794"OR"9817"OR"9839"OR"9919"OR"9932"OR"10000"OR"10010"OR"10017"OR"10022"OR"10048"OR"10049"OR"10053"OR"10081"OR"10099"OR"10100"OR"10103"OR"10104"OR"10105"OR"10116"OR"10118"OR"10142"OR"10143"OR"10153"OR"10160"OR"10162"OR"10165"OR"10185"OR"10189"OR"10190"OR"10191"OR"10199"OR"10206"OR"10209"OR"10216"OR"10229"OR"10233"OR"10241"OR"10256"OR"10278"OR"10280"OR"10288"OR"10289"OR"10290"OR"10299"OR"10330"OR"10331"OR"10367"OR"10432"OR"10474"OR"10496"OR"10499"OR"10506"OR"10524"OR"10525"OR"10526"OR"10527"OR"10528"OR"10530"OR"10531"OR"10532"OR"10534"OR"10535"OR"10536"OR"10537"OR"10538"OR"10540"OR"10541"OR"10543"OR"10557"OR"10558"OR"10560"OR"10561"OR"10579"OR"10592"OR"10675"OR"10676"OR"10677"OR"10678"OR"10680"OR"10681"OR"10704"OR"10748"OR"10759"OR"10760"OR"10764"OR"10766"OR"10768"OR"10769"OR"10770"OR"10771"OR"10783"OR"10798"OR"10799"OR"10832"OR"10857"OR"10862"OR"10875"OR"10928"OR"10929"OR"10933"OR"10934"OR"10941"OR"10947"OR"10962"OR"10966"OR"10969"OR"10977"OR"10978"OR"11017"OR"11085"OR"11114"OR"11115"OR"11116"OR"11138"OR"11139"OR"11140"OR"11141"OR"11142"OR"11143"OR"11144"OR"11219"OR"11252"OR"11239"OR"11268"OR"11326"OR"11327"OR"11328"OR"11329"OR"11410"OR"11514"OR"11552"OR"11992"OR"12012"OR"12032"OR"12033"OR"12034"OR"12035"OR"12036"OR"12037"OR"12038"OR"12039"OR"12040"OR"12041"OR"12152")
    | eval date_month_year=date_month." ".date_year
    | chart sum(total) as "Nbre_de_tracagesB" by date_month_year]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*")
    | eval date_month_year=date_month." ".date_year
    | chart sum(total) as "Nbre_de_tracages_total" by date_month_year]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*")
    | chart sum(total) as "Nbre_de_tracages_total" by date_month_year]
    [ search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL")
    | eval date_month_year=date_month." ".date_year
    | chart count(appreciation) as "Nbre_de_retour_enquete" by date_month_year]
    [ search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL")
    | eval date_month_year=date_month." ".date_year
    | eval nb5=case(appreciation="5", 5)
    | eval nb123=case(appreciation>="1" and appreciation<="3", 3)
    | eval nb1234=case(appreciation>="1" and appreciation<="4", 4)
    | eval nbtotal=case(appreciation>="1" and appreciation<="5", 5)
    | stats count(nb5) as "Nbre_de_5", count(nb123) as "Nbre_de_123", count(nb1234) as "Nbre_de_1234", count(nbtotal) as "Nbre_total" by date_month_year
    | eval pourcentage=round((Nbre_de_5/Nbre_total-(Nbre_de_123/Nbre_total))*100,2)." %"
    | rename pourcentage as deltaSAT
    | table deltaSAT date_month_year]
| stats values("Nbre appels") as "Nbre appels" values("Nbre_de_retour_enquete") as "Nbre_de_retour_enquete" values(Nbre_de_tracagesD) as Nbre_de_tracagesD values("Nbre_de_tracagesB") as "Nbre_de_tracagesB" values("Nbre_de_tracages_total") as "Nbre_de_tracages_total" values(deltaSAT) as deltaSAT by date_month_year
| eval pourcentage=round((Nbre_de_tracagesB/Nbre_de_tracages_total)*100, 2)." %"
| rename pourcentage as "Tx traçages bloquants"
| eval TxTracage=round((Nbre_de_tracagesD/'Nbre appels')*100,2)." %"
| rename TxTracage as "Tx traçages dédoublonnés"
| rename Nbre_de_tracagesD as "Nbre traçages dédoublonnés"
| sort date_month_year

I tried to sort, but it doesn't work. Thank you.
To write the x, y, and z fields out to a lookup from the middle of a search pipeline:

| appendpipe [ | fields x y z | outputlookup lookup ]
Hi @anandhalagaras1, this is the condition to identify index creation events:

index=_internal NOT StreamedSearch IndexWriter Initializing

For index deletion, you could use:

index=_internal NOT StreamedSearch event=removeIndex action=deleteIndexRequest

Then, in both cases, you can define the fields that you want to display. Ciao. Giuseppe
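For instance, a hedged sketch of the field selection (only _time, host, and _raw are safe bets on these internal events; any extracted field names would need to be verified against the actual log format):

    index=_internal NOT StreamedSearch event=removeIndex action=deleteIndexRequest
    | table _time host _raw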