All Posts


Hi @bmanikya, first of all, Splunk isn't a database, so avoid using join as those of us coming from databases usually do! There are other, more efficient methods to correlate events from two searches. Anyway, in your search there's one thing I don't understand: in the second search you have

| table _time, App2
| search App2=App1

but after the table command you have only those two fields, so where does the App1 field come from? Try to redesign your searches using the stats command, with the join field as the correlation key, something like this:

(index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId") OR (index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request")
| rex "^(?:[^ \n]* ){4}(?P<App1>.+)"
| rex "^(?:[^=\n]*=){6}\w+_\d+_(?P<App2>.+)"
| eval Time1=if(searchmatch("Allocated new applicationId"),strftime(_time,"%Y-%m-%d %H:%M"),""), Time2=if(searchmatch("OPERATION=Submit Application Request"),strftime(_time,"%Y-%m-%d %H:%M"),""), App=coalesce(App1,App2)
| stats values(Time1) AS Time1 values(Time2) AS Time2 BY App
| table App, Time1, Time2

Ciao. Giuseppe
Ideally, you should rewrite your search to avoid using joins, as they are slow. If you want to continue with joins, your subsearch should use the same field name as the joining field. The subsearch executes before the main search, so in your example App1 is not known in the subsearch (in fact, none of the fields from the main search are available to the subsearch in the join). Try something like this:

index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId"
| rex field=_raw "^(?:[^ \n]* ){4}(?P<App1>.+)"
| eval _time=strftime(_time,"%Y-%m-%d %H:%M")
| table _time, App1
| rename _time as Time1
| join type=inner App1
    [ search index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request"
    | rex field=_raw "^(?:[^=\n]*=){6}\w+_\d+_(?P<App1>.+)"
    | eval _time=strftime(_time,"%Y-%m-%d %H:%M")
    | rename _time as Time2
    | table Time2, App1]
| table Time1, App1, Time2
Hi Team, looking for help on configuring the statuspage.io add-on to ingest incidents and collect all scheduled maintenance from statuspage.io.
I would like to join Search Query 1 and Search Query 2 and get the results, but no results are found. Here is the combined search:

index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId"
| rex field=_raw "^(?:[^ \n]* ){4}(?P<App1>.+)"
| eval _time=strftime(_time,"%Y-%m-%d %H:%M")
| table _time, App1
| rename _time as Time1
| join type=inner App1
    [ search index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request"
    | rex field=_raw "^(?:[^=\n]*=){6}\w+_\d+_(?P<App2>.+)"
    | eval _time=strftime(_time,"%Y-%m-%d %H:%M")
    | table _time, App2
    | search App2=App1
    | rename _time as Time2]
| table Time1, App1, Time2, App2
Thanks for the info shared, I was able to fetch the results. I have another requirement: I want to show a bar chart with the total login count for whatever time period we select. For example, if we select 2 days, it should show a bar chart where the y-axis is the login count and the x-axis is the time selection on a daily interval (e.g. 6th Feb, 7th Feb, and so on).
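For illustration, a minimal sketch of that kind of daily login chart, assuming a hypothetical index, sourcetype, and login search term (swap them for your real ones); the time range comes from whatever the time picker supplies:

index=your_index sourcetype=your_sourcetype "login successful"
| timechart span=1d count as login_count ``` one bucket per day; login_count becomes the y-axis ```

Rendered as a bar chart, each daily bucket becomes one bar on the x-axis, so selecting 2 days in the picker shows two bars (e.g. 6th Feb and 7th Feb).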
Hi @danielbb

File and URL: these correspond to the artifact creation flow on the investigation workbench. Instead of creating a file or URL artifact on the workbench by hand, you can specify which fields should be used to create artifacts automatically when you add a notable to the investigation workbench.

More details here: https://docs.splunk.com/Documentation/ES/latest/Admin/Customizeinvestigations#Set_up_artifact_extraction_for_investigation_of_notable_events
Good morning. Let me tell you about my situation. We have a forwarder inside a Docker container (python:3.11-slim-bullseye). We've noticed that when we deploy an application from the deployment server to the forwarder by adding a stanza to the inputs.conf file, the forwarder's ExecProcessor doesn't detect the change. Could you please help me understand why? Thank you very much, regards.
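As a first check, a hedged diagnostic sketch, assuming the forwarder sends its own _internal logs to the indexers (the host value is a placeholder):

index=_internal host=<your_forwarder> sourcetype=splunkd component=ExecProcessor
| stats count latest(_time) as last_seen by log_level ``` does ExecProcessor log anything at all after the deployment? ```
| convert ctime(last_seen)

If ExecProcessor stays silent after the app arrives, the usual suspect is that the deployed app did not trigger a reload or restart; setting restartSplunkd = true for that server class in serverclass.conf, or restarting the containerized forwarder, normally picks the new stanza up.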
Hi All, how can we modify the search below so that it shows only the enabled correlation searches which did not trigger a notable in the past X days?

| rest /services/saved/searches
| search title="*Rule" action.notable=1
| fields title
| eval has_triggered_notables = "false"
| join type=outer title
    [ search index=notable search_name="*Rule" orig_action_name=notable
    | stats count by search_name
    | fields - count
    | rename search_name as title
    | eval has_triggered_notables = "true" ]

Thanks.
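A sketch of one possible modification, assuming the REST endpoint's standard disabled field is what you want to filter on and that the X-day window is set via earliest in the subsearch:

| rest /services/saved/searches
| search title="*Rule" action.notable=1 disabled=0 ``` keep only enabled correlation searches ```
| fields title
| eval has_triggered_notables = "false"
| join type=outer title
    [ search index=notable search_name="*Rule" orig_action_name=notable earliest=-30d@d
    | stats count by search_name
    | fields - count
    | rename search_name as title
    | eval has_triggered_notables = "true" ]
| where has_triggered_notables = "false" ``` only searches with no notables in the window survive ```

Change earliest=-30d@d to whatever X days you need.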
Good morning. Let me tell you about my case. In my company, we have five indexers, one for development and the other four for production. We have an inputs.conf in a forwarder inside a Docker container (python:3.11-slim-bullseye) with three stanzas that execute a script with arguments. One stanza sends the data to development and runs every two minutes; the other two send the data to production, one running every minute and the other every two minutes. We noticed that during a period of time last night, we did not receive any data from the forwarder. The gap from the development stanza is expected, as the machine was being patched and Splunk was stopped during exactly that period. We have observed that during those hours the forwarder did not execute any scripts. In that time frame, we found these traces in the forwarder's watchdog.log:

02-05-2024 20:02:18.220 +0000 ERROR Watchdog - No response received from IMonitoredThread=0x7fabb87fec60 within 8000 ms. Looks like thread name='ExecProcessor' tid=1937852 is busy !? Starting to trace with 8000 ms interval.

Could you please help me understand why the forwarder did not execute any scripts during that time frame? Thank you very much. Best regards.
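To correlate the gap, a hedged sketch over the forwarder's own _internal logs (assuming they reach your indexers; the host value is a placeholder):

index=_internal host=<your_forwarder> sourcetype=splunkd (component=ExecProcessor OR component=Watchdog)
| timechart span=5m count by component ``` a flat ExecProcessor line next to the Watchdog errors marks the blocked window ```

If ExecProcessor goes quiet exactly while Watchdog reports the busy thread, the scripted inputs were blocked (for example by a script invocation that never returned) rather than skipped by the scheduler.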
Hi @gitingua, in classic dashboards it isn't possible; you have to use Dashboard Studio for this. Ciao. Giuseppe
Hi @anissabnk, don't use date_month_year, use date_year_month instead, and add the month number:

| eval month=strftime(_time,"%m")
| eval date_year_month=date_year." - ".month." - ".date_month

In this way you can sort the results by date_year_month. Then, at the end of the search, you can remove the month number like this:

| eval date_year_month=substr(date_year_month,1,7).substr(date_year_month,12,15)

Be sure to check that the substring offsets are correct! Ciao.
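A tiny self-contained demo of why the zero-padded month number makes the string sortable; it is a toy example (not tied to your data) that you can run as-is:

| makeresults count=12
| streamstats count as month_num
| eval mm=if(month_num<10, "0".tostring(month_num), tostring(month_num)) ``` zero-pad so "02" sorts before "10" ```
| eval date_year_month="2024 - ".mm
| sort date_year_month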
Colleagues, hi all! Can you give me some advice on editing dashboards? I have 4 static tables, and I need to arrange them so that the first three are on the left, stacked in order, and the fourth is stretched on the right so that it is large and long and there is no empty space. I tried to play around with the XML, but to no avail. The XML itself is below. If this can't be done at all, sorry for such a question! Thanks to all!

<form>
  <label>Testimg</label>
  <row>
    <panel depends="$alwaysHideCSS$">
      <title>Настройка по ширине</title>
      <html>
        <style>
          #test_1{ width:50% !important; }
          #test_2{ width:50% !important; }
          #test_3{ width:50% !important; }
          #test_4{ width:50% !important; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="test_1">
      <title>Table 1</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=5 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel id="test_2">
      <title>Table 2</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=6 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel id="test_3">
      <title>Table 4</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=20 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel id="test_4">
      <title>Table 3</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=7 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
Ok for me, but How to have the result in the chronological order.  For now, I have this result, but not yet in a chronological order.  I have modified my spl request like this :  | union maxtime=500 timeout=500 [ search index="idx_arv_ach_appels_traites" AND (activite_cuid="N1 FULL" OR activite="FULL AC") AND (NOT activite_cuid!="N1 FULL" AND NOT activite!="FULL AC")|eval date_month_year=date_month." ".date_year| stats sum(appels_traites) as "Nbre appels" by date_month_year] [ search index="idx_arv_ach_tracage" (equipe_travail_libelle="*MAROC N1 AC FULL") | eval date=strftime(_time,"%Y-%m-%d") | eval date_month_year=date_month." ".date_year | dedup date,code_alliance_conseiller_responsable,num_client | chart count(theme_libelle) as "Nbre_de_tracagesD" by date_month_year] [search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*") AND (code_resolution="474"OR"836"OR"2836"OR"2893"OR"3085"OR"3137"OR"3244"OR"4340"OR"4365"OR"4784"OR"5893"OR"5896"OR"5897"OR"5901"OR"5909"OR"5914"OR"6744"OR"7150"OR"8020"OR"8531"OR"8534"OR"8535"OR"8548"OR"8549"OR"8709"OR"8876"OR"8917"OR"8919"OR"8946"OR"8961"OR"8962"OR"8970"OR"8974"OR"8998"OR"8999"OR"9000"OR"9001"OR"9004"OR"9006"OR"9007"OR"9010"OR"9011"OR"9012"OR"9048"OR"9052"OR"9058"OR"9059"OR"9069"OR"9088"OR"9089"OR"9090"OR"9095"OR"9107"OR"9108"OR"9116"OR"9148"OR"9150"OR"9169"OR"9184"OR"9190"OR"9207"OR"9208"OR"9209"OR"9211"OR"9214"OR"9223"OR"9239"OR"9240"OR"9241"OR"9248"OR"9251"OR"9274"OR"92752"OR"9276"OR"9288"OR"9299"OR"9300"OR"9302"OR"9323"OR"9324"OR"9366"OR"9382"OR"9385"OR"9447"OR"9450"OR"9455"OR"9466"OR"9467"OR"9476"OR"9516"OR"9559"OR"9584"OR"9603"OR"9627"OR"9633"OR"9640"OR"9654"OR"9670"OR"9710"OR"9735"OR"9740"OR"9782"OR"9784"OR"9785"OR"9786"OR"9794"OR"9817"OR"9839"OR"9919"OR"9932"OR"10000"OR"10010"OR"10017"OR"10022"OR"10048"OR"10049"OR"10053"OR"10081"OR"10099"OR"10100"OR"10103"OR"10104"OR"10105"OR"10116"OR"10118"OR"10142"OR"10143"OR"10153"OR"10160"OR"10162"OR"10165"OR"10185"OR"10189"OR"10190"OR"10191"OR"10199"OR"10206"OR"10209"OR"10216"OR"10229"OR"10233"OR"10241"OR"10256"OR"10278"OR"10280"OR"10288"OR"10289"OR"10290"OR"10299"OR"10330"OR"10331"OR"10367"OR"10432"OR"10474"OR"10496"OR"10499"OR"10506"OR"10524"OR"10525"OR"10526"OR"10527"OR"10528"OR"10530"OR"10531"OR"10532"OR"10534"OR"10535"OR"10536"OR"10537"OR"10538"OR"10540"OR"10541"OR"10543"OR"10557"OR"10558"OR"10560"OR"10561"OR"10579"OR"10592"OR"10675"OR"10676"OR"10677"OR"10678"OR"10680"OR"10681"OR"10704"OR"10748"OR"10759"OR"10760"OR"10764"OR"10766"OR"10768"OR"10769"OR"10770"OR"10771"OR"10783"OR"10798"OR"10799"OR"10832"OR"10857"OR"10862"OR"10875"OR"10928"OR"10929"OR"10933"OR"10934"OR"10941"OR"10947"OR"10962"OR"10966"OR"10969"OR"10977"OR"10978"OR"11017"OR"11085"OR"11114"OR"11115"OR"11116"OR"11138"OR"11139"OR"11140"OR"11141"OR"11142"OR"11143"OR"11144"OR"11219"OR"11252"OR"11239"OR"11268"OR"11326"OR"11327"OR"11328"OR"11329"OR"11410"OR"11514"OR"11552"OR"11992"OR"12012"OR"12032"OR"12033"OR"12034"OR"12035"OR"12036"OR"12037"OR"12038"OR"12039"OR"12040"OR"12041"OR"12152") |eval date_month_year=date_month." 
".date_year | chart sum(total) as "Nbre_de_tracagesB" by date_month_year] [search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*") |eval date_month_year=date_month." ".date_year| chart sum(total) as "Nbre_de_tracages_total" by date_month_year] [search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*")| chart sum(total) as "Nbre_de_tracages_total" by date_month_year] [search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL") |eval date_month_year=date_month." ".date_year | chart count(appreciation) as "Nbre_de_retour_enquete" by date_month_year] [search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL") |eval date_month_year=date_month." ".date_year | eval nb5=case(appreciation="5", 5) | eval nb123=case(appreciation>="1" and appreciation<="3", 3) | eval nb1234=case(appreciation>="1" and appreciation<="4", 4) | eval nbtotal=case(appreciation>="1" and appreciation<="5", 5)| stats count(nb5) as "Nbre_de_5", count(nb123) as "Nbre_de_123", count(nb1234) as "Nbre_de_1234", count(nbtotal) as "Nbre_total" by date_month _year | eval pourcentage=round((Nbre_de_5/Nbre_total-(Nbre_de_123/Nbre_total))*100,2)." %" | rename pourcentage as deltaSAT | table deltaSAT date_month] | stats values("Nbre appels") as "Nbre appels" values("Nbre_de_retour_enquete") as "Nbre_de_retour_enquete" values(Nbre_de_tracagesD) as Nbre_de_tracagesD values("Nbre_de_tracagesB") as "Nbre_de_tracagesB" values("Nbre_de_tracages_total") as "Nbre_de_tracages_total" values(deltaSAT) as deltaSAT by date_month_year | eval pourcentage=round((Nbre_de_tracagesB/Nbre_de_tracages_total)*100, 2)." %" | rename pourcentage as "Tx traçages bloquants" | eval TxTracage=round((Nbre_de_tracagesD/'Nbre appels')*100,2)." %" | rename TxTracage as "Tx traçages dédoublonnés" | rename Nbre_de_tracagesD as "Nbre traçages dédoublonnés" |sort  date_month_year I tried to sort but it doesn't work.    Thank you 
| appendpipe [ | fields x y z | outputlookup lookup ]
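For context, a slightly fuller sketch of that pattern with placeholder names (your_index, my_snapshot.csv and the x/y/z fields are illustrative): appendpipe runs the bracketed pipeline over the current result set, so outputlookup writes just the x/y/z columns to the lookup while the main results continue down the pipeline.

index=your_index
| stats count by x, y, z
| appendpipe
    [ | fields x y z
    | outputlookup my_snapshot.csv ``` write a trimmed copy of the results to the lookup ```
    | where false() ] ``` drop the appended copy so the visible results stay unchanged ```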
Hi @anandhalagaras1, this is the condition to identify index creation events:

index=_internal NOT StreamedSearch IndexWriter Initializing

For index deletion, you could use:

index=_internal NOT StreamedSearch event=removeIndex action=deleteIndexRequest

Then, in both cases, you can define the fields that you want to display. Ciao. Giuseppe
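A hedged sketch of how the two conditions might be combined into one scheduled alert; the grouping fields after the base search are assumptions about what those _internal events expose, so check a few raw events first:

index=_internal NOT StreamedSearch ((IndexWriter Initializing) OR (event=removeIndex action=deleteIndexRequest))
| eval change_type=if(searchmatch("removeIndex"), "index deleted", "index created") ``` label each event ```
| stats latest(_time) as last_seen count by change_type, host
| convert ctime(last_seen)

Scheduled, for example, hourly over the last hour, any returned row can drive the alert action.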
"Only data with cl1 is getting replaced. I also have data with cl3, which needs to be replaced by ACD85."

There is no way @ITWhisperer's search would give this partial replacement. But first, your search is very inefficient: the filtering in the third line starting with search should be done in the first line so fewer events are processed. Secondly, using regex on rigidly formatted data (CSV) is wasteful and prone to errors. This is what I suggest, using exactly what @ITWhisperer proposed:

index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv" ("\cl3\" OR "\cl1\")
| eval filename = split(_raw, ",")
| eval filesize = mvindex(filename, 1), filelocation = mvindex(filename, 2)
| eval filename = mvindex(filename, 0)
| eval filelocation=if(like(filelocation,"%\cl1%"),"ACD55","ACD85")

Also important: play with the following emulation and compare with your real data:

| makeresults
| fields - _*
| eval data=split("012624.1230,13253.10546875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1230 012624.1230,2236.3291015625,E:\totalview\ftp\acd\cl3\backup\012624.1230 012624.1200,13338.828125,E:\totalview\ftp\acd\cl1\backup_modified\012624.1200 012624.1200,2172.1640625,E:\totalview\ftp\acd\cl3\backup\012624.1200 012624.1130,13292.32421875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1130 012624.1130,2231.9658203125,E:\totalview\ftp\acd\cl3\backup\012624.1130 012624.1100,13438.65234375,E:\totalview\ftp\acd\cl1\backup_modified\012624.1100", " ")
| mvexpand data
| rename data AS _raw
| search (\\cl1\\ OR \\cl3\\)
``` the above emulates index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv" ("\cl3\" OR "\cl1\") ```
| eval filename = split(_raw, ",")
| eval filesize = mvindex(filename, 1), filelocation = mvindex(filename, 2)
| eval filename = mvindex(filename, 0)
| eval filelocation=if(like(filelocation,"%\cl1%"),"ACD55","ACD85")

The output is:

_raw  filelocation  filename  filesize
012624.1230,13253.10546875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1230  ACD55  012624.1230  13253.10546875
012624.1230,2236.3291015625,E:\totalview\ftp\acd\cl3\backup\012624.1230  ACD85  012624.1230  2236.3291015625
012624.1200,13338.828125,E:\totalview\ftp\acd\cl1\backup_modified\012624.1200  ACD55  012624.1200  13338.828125
012624.1200,2172.1640625,E:\totalview\ftp\acd\cl3\backup\012624.1200  ACD85  012624.1200  2172.1640625
012624.1130,13292.32421875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1130  ACD55  012624.1130  13292.32421875
012624.1130,2231.9658203125,E:\totalview\ftp\acd\cl3\backup\012624.1130  ACD85  012624.1130  2231.9658203125
012624.1100,13438.65234375,E:\totalview\ftp\acd\cl1\backup_modified\012624.1100  ACD55  012624.1100  13438.65234375

As you see, there is no such "partial replacement". You will need to illustrate and explain any discrepancy between your real data and this mock data if you don't get the same results.
I'm reaching out with this community thread because we are stuck deploying the premium app IT Service Intelligence on Splunk Enterprise on-prem. Below are the troubles we ran into despite following the installation steps:
• I stopped the splunk service
• I extracted the ITSI .spl package according to the documentation
• I started the services, but the splunkd component wasn't able to activate the appserver, and therefore the web server either

Digging into web_service.log and mainly into splunkd.log, I found these entries:

01-26-2024 17:26:50.164 +0000 ERROR UiPythonFallback [115369 WebuiStartup] - Couldn't start any appserver processes, UI will probably not function correctly!
01-26-2024 17:26:50.164 +0000 ERROR UiHttpListener [115369 WebuiStartup] - No app server is running, stop initializing http server.

So I proceeded to stop the services, uninstall the app component folders and their index storage repositories (according to the docs); then I started the services again and all components, including the web service, worked fine.

We've deployed Splunk Enterprise on an Ubuntu server (the package is splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb) and downloaded the ITSI app from its Splunkbase link https://splunkbase.splunk.com/app/1841

Could you give us some hints about this? We'd like to verify some ITSI features as soon as possible.

Thanks in advance and regards

Luigi
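Before reinstalling, one hedged diagnostic sketch (assuming the instance still indexes its own _internal logs): look for the earliest error in the startup chain rather than the last one, since "Couldn't start any appserver processes" is usually only a symptom.

index=_internal sourcetype=splunkd log_level=ERROR (component=UiPythonFallback OR component=UiHttpListener)
| stats earliest(_time) as first_seen count by component
| convert ctime(first_seen)
| sort first_seen

Two things worth checking around the same timestamps are file ownership under $SPLUNK_HOME (extracting the .spl as a different user than the one running splunkd is a common culprit) and whether the appserver/web ports are already in use.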
Hi Team, our Splunk is hosted in the Cloud. My requirement is that if an index gets created I need to get an alert, and similarly if an index gets deleted from the Search Head I need to get an alert. So kindly help with the query.
Ignore the deleted answer. With the missing space between "Conf" and "-console", I couldn't see the problem you were trying to fix. (It's really important to illustrate data accurately when asking data analytics questions.) So, the regex doesn't handle inputs like "-config Conf -console -ntpsync no --check_core no". In fact, the regex is already too heavy-handed. Instead of adding more regex tax, the following method semantically expresses the command-line syntax and does not require composition and decomposition. (A command line allows multiple spaces and such, but nothing another trim couldn't fix.)

| eval flag = mvindex(split(Aptlauncher_cmd, " -"), 1, -1)
| eval flag = trim(flag, "-")
| mvexpand flag
| eval flag = split(flag, " ")
| eval value = mvindex(flag, 1), flag = mvindex(flag, 0)
| eval value = if(isnull(flag), null(), coalesce(value, "true"))

The samples (including "-config Conf -console -ntpsync no --check_core no") will give, per Aptlauncher_cmd, these flag/value pairs:

launch test -config Conf -console -ntpsync no --check_core no
    config = Conf, console = true, ntpsync = no, check_core = no
launch test -config basic_config.cfg -system test_system1 -retry 3
    config = basic_config.cfg, system = test_system1, retry = 3
launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4
    con-fig = advanced-config_v2.cfg, sys_tem = test_system_2, re-try = 4
launch update -email user@example.com -domain test.domain.com -port 8080
    email = user@example.com, domain = test.domain.com, port = 8080
launch deploy -verbose -dry_run -force
    verbose = true, dry_run = true, force = true
launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com"
    task = "Deploy, at = "2023-07-21, notify = "admin@example.com"
launch clean -@cleanup -remove_all -v2.5
    @cleanup = true, remove_all = true, v2.5 = true
launch start -config@version2 --custom-env DEV-TEST --update-rate@5min
    config@version2 = true, custom-env = DEV-TEST, update-rate@5min = true
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent
    env = DEV, build-version = 1.0.0, @retry-limit = 5, log-level = debug, silent = true
launch execute -file script.sh -next-gen --flag -another-flag value
    file = script.sh, next-gen = true, flag = true, another-flag = value
launch execute process_without_any_flags
    (no flag, no value)
launch special -@@ -##value special_value --$$$ 100
    @@ = true, ##value = special_value, $$$ = 100
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2
    add = 5, subtract = 3, multiply@2.5 = true, divide@2 = true
Hi, I have a connection in Splunk DB Connect on my HF (connected to my SH, and I know the connection is stable because other sources reach my SH from the HF), but data is not populated in my index. I also tried connecting to a new index=database on my SH and HF and restarting, and it did not work.
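A hedged first diagnostic, assuming the HF forwards its _internal logs and that the DB Connect log files keep their usual names (the source pattern is an assumption; adjust it to what you actually have):

index=_internal host=<your_heavy_forwarder> source=*splunk_app_db_connect* (ERROR OR WARN)
| stats count latest(_time) as last_seen by source ``` which DB Connect log is complaining, and how recently ```
| convert ctime(last_seen)

Errors here usually separate a driver/connection problem from a rising-column or checkpoint problem; if nothing is logged at all, check that the input is enabled and that its query actually returns rows for the current checkpoint value.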