All Topics

Hello guys,  Can you help us with this case, thank you in advance. We received 300k events in 24 hours, we have to process on peak, about 15k in real-time, and this job takes 140 sec to process, is it possible to make it take less time ? The application it's already developed, the output should stay the same.  Savedsearches.conf:       [Preatreament - Opération Summary] action.email.show_password = 1 action.logevent = 1 action.logevent.param.event = _time=$result._time$|ABC123456Emetrice=$result.ABC123456Emetrice$|ABC123456Receptrice=$result.ABC123456Receptrice$|ABCaeiou=$result.ABCaeiou$|ABCdonneurbbbb=$result.ABCdonneurbbbb$|AAAAaeiou=$result.AAAAaeiou$|AAAADonneurbbbb=$result.AAAADonneurbbbb$|application=$result.application$|canal=$result.canal$|codeE=$result.codeE$|count=$result.count$|csv=$result.csv$|dateEmissionrrrr=$result.dateEmissionrrrr$|dateReglement=$result.dateReglement$|date_hour=$result.date_hour$|date_mday=$result.date_mday$|date_minute=$result.date_minute$|date_month=$result.date_month$|date_second=$result.date_second$|date_wday=$result.date_wday$|date_year=$result.date_year$|date_zone=$result.date_zone$|deviseOrigine=$result.deviseOrigine$|deviseReglement=$result.deviseReglement$|encryptedAAAAaeiou=$result.encryptedAAAAaeiou$|encryptedAAAADonneurbbbb=$result.encryptedAAAADonneurbbbb$|etat=$result.etat$|eventtype=$result.eventtype$|heureEmissionrrrr=$result.heureEmissionrrrr$|host=$result.host$|identifiantrrrr=$result.identifiantrrrr$|index=$result.index$|info_max_time=$result.info_max_time$|info_min_time=$result.info_min_time$|info_search_time=$result.info_search_time$|lastUpdate=$result.lastUpdate$|libelleRejet=$result.libelleRejet$|linecount=$result.linecount$|montantOrigine=$result.montantOrigine$|montantTransfere=$result.montantTransfere$|motifRejet=$result.motifRejet$|nomaeiou=$result.nomaeiou$|nomDonneurbbbb=$result.nomDonneurbbbb$|orig_index=$result.orig_index$|orig_sourcetype=$result.orig_sourcetype$|phase=$result.phase$|punct=$result.punct$|refEstampillage=$result.refEstampillage$|refFichier=$result.refFichier$|refbbbbClient=$result.refbbbbClient$|refTransaction=$result.refTransaction$|search_name=$result.search_name$|search_now=$result.search_now$|sens=$result.sens$|source=$result.source$|sourcetype=$result.sourcetype$|splunk_server=$result.splunk_server$|splunk_server_group=$result.splunk_server_group$|startDate=$result.startDate$|summaryDate=$result.summaryDate$|timeendpos=$result.timeendpos$|timestamp=$result.timestamp$|timestartpos=$result.timestartpos$|typeOperation=$result.typeOperation$|summaryDate_ms=$result.summaryDate_ms$|UUUUUETR=$result.UUUUUETR$|messageDefinitionIdentifier=$result.messageDefinitionIdentifier$|ssssssInstructionId=$result.ssssssInstructionId$|endToEndIdentification=$result.endToEndIdentification$| action.logevent.param.index = bam_xpto_summary action.logevent.param.sourcetype = Opération_summary action.lookup = 0 action.lookup.append = 1 action.lookup.filename = test.csv alert.digest_mode = 0 alert.severity = 1 alert.suppress = 0 alert.track = 0 counttype = number of events cron_schedule = */1 * * * * dispatch.earliest_time = -6h dispatch.latest_time = now enableSched = 1 quantity = 0 relation = greater than search = (index="bam_xpto" AND sourcetype="Opération") OR (index="bam_xpto_summary" sourcetype="Opération_summary" earliest=-15d latest=now)\ | search [ search index="bam_xpto" AND sourcetype="Opération" \ | streamstats count as id \ | eval splitter=round(id/500) \ | stats values(refEstampillage) as refEstampillage by splitter\ | 
fields refEstampillage]\ | sort 0 - _time indexTime str(sens)\ | fillnull application phase etat canal motifRejet libelleRejet identifiantrrrr dateReglement ABCdonneurbbbb nomDonneurbbbb ABCaeiou nomaeiou codeEtablissement refFichier messageDefinitionIdentifier UUUUUETR ssssssInstructionId endToEndIdentification value=" " \ | eval codeEtablissement=if(codeEtablissement=="", "N/R",codeEtablissement),\ identifiantrrrr=if(identifiantrrrr=="", "N/R",identifiantrrrr),\ dateReglement=if(dateReglement=="", "N/R",dateReglement),\ ABCdonneurbbbb=if(ABCdonneurbbbb=="", "N/R",ABCdonneurbbbb), \ nomDonneurbbbb=if(nomDonneurbbbb=="", "N/R",nomDonneurbbbb),\ ABCaeiou=if(ABCaeiou=="", "N/R",ABCaeiou),\ nomaeiou=if(nomaeiou=="", "N/R",nomaeiou),\ libelleRejet=if(libelleRejet=="", "N/R",libelleRejet),\ refFichier=if(refFichier=="", "N/R",refFichier),\ application=if(application=="", "N/R",application),\ canal=if(canal=="", "N/R",canal),\ motifRejet=if(motifRejet=="", "N/R",motifRejet),\ count=if(sourcetype="Opération", 1, count),\ startDate=if(isnull(startDate), _time, startDate),\ typeOperation = if(NOT (messageDefinitionIdentifier==" " AND endToEndIdentification== " " AND ssssssInstructionId == " " AND UUUUUETR== " ") , messageDefinitionIdentifier, typeOperation), \ refTransaction = if(NOT (messageDefinitionIdentifier==" " AND endToEndIdentification== " " AND ssssssInstructionId == " " AND UUUUUETR== " ") , ssssssInstructionId, refTransaction),\ relatedRef = if(NOT (messageDefinitionIdentifier==" " AND endToEndIdentification== " " AND ssssssInstructionId == " " AND UUUUUETR== " ") , endToEndIdentification, relatedRef)\ | foreach * \ [eval <<FIELD>>=replace(<<FIELD>>, "\"","'"), <<FIELD>>=replace(<<FIELD>>, "\\\\"," "), <<FIELD>>=replace(<<FIELD>>, ",",".")]\ | eval nomDonneurbbbb=replace(nomDonneurbbbb,"[^\p{L}\s]",""), nomaeiou=replace(nomaeiou,"[^\p{L}\s]","") \ | eval nomDonneurbbbb=replace(nomDonneurbbbb,"\s{2,99}"," "), nomaeiou=replace(nomaeiou,"\s{2,99}"," ") \ | stats latest(_time) as _time, latest(Actions_xpto) as Actions_xpto, list(sens) as sens, list(phase) as phase, list(etat) as etat, list(identifiantrrrr) as identifiantrrrr, list(dateReglement) as dateReglement, list(ABCdonneurbbbb) as ABCdonneurbbbb, list(nomDonneurbbbb) as nomDonneurbbbb, list(ABCaeiou) as ABCaeiou, list(nomaeiou) as nomaeiou, list(codeEtablissement) as codeEtablissement, list(index) as index, list(count) as count, list(typeOperation) as typeOperation, list(libelleRejet) as libelleRejet , list(application) as application,latest(dateEmissionrrrr) as dateEmissionrrrr, list(canal) as canal, earliest(deviseOrigine) as deviseOrigine, earliest(deviseReglement) as deviseReglement, earliest(refbbbbClient) as refbbbbClient, list(refFichier) as refFichier, earliest(montantOrigine) as montantOrigine, earliest(montantTransfere) as montantTransfere, last(AAAADonneurbbbb) as AAAADonneurbbbb, last(AAAAaeiou) as AAAAaeiou, list(motifRejet) as motifRejet, list(refTransaction) as refTransaction, earliest(encryptedAAAAaeiou) as encryptedAAAAaeiou, earliest(encryptedAAAADonneurbbbb) as encryptedAAAADonneurbbbb, first(heureEmissionrrrr) as heureEmissionrrrr, first(sourcetype) as sourcetype, last(ABC123456Receptrice) as ABC123456Receptrice, last(ABC123456Emetrice) as ABC123456Emetrice,latest(summaryDate) as summaryDate, list(startDate) as startDate, list(endToEndIdentification) as endToEndIdentification, list(messageDefinitionIdentifier) as messageDefinitionIdentifier, list(UUUUUETR) as UUUUUETR, list(ssssssInstructionId) as 
ssssssInstructionId, count(eval(sourcetype="Opération")) as nbOperation, min(startDate) as minStartDate by refEstampillage\ | eval lastSummaryIndex=mvfind(index, "bam_xpto_summary"), lastSummaryIndex=if(isnull(lastSummaryIndex), -1, lastSummaryIndex)\ | foreach * \ [eval <<FIELD>>=mvindex(<<FIELD>>,0, lastSummaryIndex)]\ | eval etat=mvjoin(etat,","), phase=mvjoin(phase,","), identifiantrrrr=mvjoin(identifiantrrrr,","), dateReglement=mvjoin(dateReglement,","), ABCdonneurbbbb=mvjoin(ABCdonneurbbbb,","), nomDonneurbbbb=mvjoin(nomDonneurbbbb,","), ABCaeiou=mvjoin(ABCaeiou,","), nomaeiou=mvjoin(nomaeiou,","), codeEtablissement=mvjoin(codeEtablissement,","),application=mvjoin(application,","),canal=mvjoin(canal,","),motifRejet=mvjoin(motifRejet,","),libelleRejet =mvjoin(libelleRejet ,","),dateReglement=mvjoin(dateReglement,","),refFichier=mvjoin(refFichier,","), sens=mvjoin(sens,","), startDate=mvjoin(startDate,","), count=mvjoin(count,","), oldSummary=summaryDate, endToEndIdentification = mvjoin (endToEndIdentification, ","), messageDefinitionIdentifier = mvjoin (messageDefinitionIdentifier, ","), UUUUUETR = mvjoin(UUUUUETR, ","), ssssssInstructionId = mvjoin(ssssssInstructionId, ","), typeOperation = mvjoin(typeOperation, ","), refTransaction = mvjoin(refTransaction, ",")\ | where _time >= summaryDate OR isnull(summaryDate)\ | majoperation\ | eval count=if(nbOperation > count, nbOperation, count)\ | eval startDate=if(minStartDate<startDate,minStartDate, startDate) \ | where !(mvcount(index)==1 AND index="bam_xpto_summary") \ | fillnull codeEtablissement value="N/R"\ | fillnull refFichier value="Aucun"\ | eval summaryDate=_time, lastUpdate=now(), codeE=codeEtablissement, summaryDate_ms=mvindex(split(_time,"."),1)\ | fields - codeEtablissement index           limits.conf max_per_result_alerts = 500 Inspector Thank you again, waiting anxiously for your answer, Best regards, Ricardo Alves
Hi everyone, I want to create a dashboard where the time filter (a custom one, not a Splunk preset) affects the results in the table. The table's data comes from a database, so I use dbxquery. When I run the search below, I get this error: Error in 'eval' command: The expression is malformed. Expected IN. I have no idea what is wrong; does anyone have an idea, please?

| dbxquery connection="connection_name" [| makeresults
| eval time1 = tostring("-2h@h")
| eval time2 = tostring("@h")
| eval time2 = if(time2=="", now(), time2)
| eval time1 = if(time2=="now", relative_time(now(), time1), time1)
| eval time2 = if(time2=="now", now(), time2)
| eval time1 = if(match(time1,"[@|y|q|mon|w|d|h|m|s]"), relative_time(now(), time1), time1)
| eval time2 = if(match(time2,"[@|y|q|mon|w|d|h|m|s]"), relative_time(now(), time2), time2)
| eval time1 = strftime(time1, "%Y-%m-%d %H:%M:%S")
| eval time2 = strftime(time2, "%Y-%m-%d %H:%M:%S")
| eval query = "SELECT * FROM "catalog"."schema"."table" WHERE date_time BETWEEN '" . time1 . "' AND '" . time2 . "' "
| return query]

Thanks a lot.
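For reference, a minimal sketch (not a tested fix) of how double quotes are usually escaped inside an eval string literal in SPL, reusing the catalog/schema/table placeholders from the query above; unescaped inner quotes are one common cause of the "malformed expression" message:

| eval query = "SELECT * FROM \"catalog\".\"schema\".\"table\" WHERE date_time BETWEEN '" . time1 . "' AND '" . time2 . "'"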
Hello Splunkers, I configured my HF to pull data from an Event Hub. I'm receiving logs, but there are too many (around 130 GB/day), and my HF often has trouble parsing and forwarding the logs during the peak of the data. I wanted to use an additional HF to share the work, but I do not know how to proceed. If I configure the Add-On on this new HF the same way I did for the first, I will just end up with duplicated data... Would you have any idea? Thanks, GaetanVP
Hello Splunkers, In a Splunk clustered environment, the "coldToFrozenDir" will be the same for each indexer, since it is deployed by the master node to all of them. So how will Splunk handle the fact that each indexer has replicated data? Will the data be duplicated on my archive storage? Regards, GaetanVP
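For illustration, a minimal indexes.conf sketch of the kind of stanza being described; the index name and archive path are hypothetical, not taken from the post:

[my_index]
coldToFrozenDir = /opt/splunk/archive/my_index/frozen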
Hello, We tried to enable SAML SSO on Splunk. We thought it would be simple, since it is just an exchange of the two XML configuration files, but it is not working at all. When we log in, we are redirected to an unknown server. We have SSO on many other applications without encountering that problem and without that "sh-i-XX" URL. Could someone who has enabled SSO help me? Below is the conf that was created automatically from the XML files. Thanks a lot, Dimitri
Hi Splunkers, Has anyone loaded/opened a PDF file as part of a dashboard (not a link to the PDF file)? Thanks in advance. Kind regards, Ariel
Hi, My customer posed a query to me: is it possible to forward syslog from a UF to syslog-ng using the BSD/IETF syslog format? If so, how would one go about implementing it? Thank you in advance for any information provided. Regards, Mikhael
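As a point of reference, this is a sketch of the syslog-output stanza that outputs.conf supports on Splunk forwarders; the group name and syslog-ng host are hypothetical, and whether this satisfies the BSD/IETF format requirement on a UF is exactly the open question here:

# outputs.conf (hypothetical target)
[syslog:my_syslogng_group]
server = syslogng.example.com:514
type = udp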
Hi Team, We are planning to integrate Splunk monitoring into a React Native mobile app, but we could not find any suitable package on npm/Yarn. Is it possible to use Splunk in React Native mobile apps?
Hi, I'm trying to implement the Microsoft Graph Security Add-On for Splunk. I'm using Splunk Enterprise version 8. The input fails with the error below:

2022-11-29 14:19:07,357 ERROR pid=17546 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/ta_microsoft_graph_security_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/microsoft_graph_security.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/input_module_microsoft_graph_security.py", line 63, in collect_events
    access_token = _get_access_token(helper)
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/input_module_microsoft_graph_security.py", line 39, in _get_access_token
    return access_token[ACCESS_TOKEN]
KeyError: 'access_token'
Hi All, We are trying to collect desktop experience data in Splunk using uberAgent. We have installed the Splunk Enterprise trial on a Windows machine; the same machine hosts uberAgent, and an additional machine also runs uberAgent and sends logs to this server. Following the message popup, we created an index for the incoming uberAgent data and can see the events piling up in that index, but we have failed to get them populated in any dashboard or search. Please share a step-by-step setup guide for this architecture, or suggest whether additional configuration is needed to support uberAgent logs.
Hello! DSDL (the Deep Learning Toolkit) is set up and the Golden Image CPU container is started. I ran the example "Entity Recognition and Extraction Example for Japanese using spaCy + Ginza Library" and got the following error:

MLTKC error: /fit: ERROR: unable to initialize module. Ended with exception: No module named 'ja_ginza'
MLTKC parameters: {'params': {'algo': 'spacy_ner_ja', 'epochs': '100'}, 'args': ['text'], 'feature_variables': ['text'], 'model_name': 'spacy_ginza_entity_extraction_model', 'output_name': 'extracted', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None}

Does the container "Golden Image CPU" support Japanese entity extraction? Any help would be much appreciated. Thank you!!
Greetings. We recently turned on a HEC and have JSON data coming in, and I have noticed that multiple JSON blobs are embedded in _raw. I searched several solutions and found one that did parse _raw into a new column "split_raw", and I went so far as to try | eval raw=split_raw, but when I do | table * it still shows all the data from the first entry only. I think my questions are: 1. The events with multiple JSON entries seem to occur when a bunch arrive at about the same time, so is there a way to force these to split at ingestion (to guarantee a 1:1 JSON-to-event mapping)? My guess is I may need to play with my sourcetype, but I'm looking for some guidance/thoughts. 2. If not, will I have to split them (like the link above) and then do some processing on the new split_raw field? Thank you so much for leads and thoughts on this!
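For context on question 1, here is a minimal props.conf sketch of the kind of sourcetype tweak usually involved, assuming the payloads arrive as back-to-back JSON objects and that index-time line breaking applies to this HEC input (both assumptions):

# props.conf - hypothetical sourcetype name
[my_hec_sourcetype]
SHOULD_LINEMERGE = false
# break the stream between the closing and opening braces of adjacent JSON objects
LINE_BREAKER = \}(\s*)\{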
Before 7:05 – green
Between 7:05 and 7:45 – yellow
After 7:45 – red
How can I implement this logic in Splunk? I have written JavaScript logic, but it throws an error where < and > are used. Can someone please help me work through this? Thanks in advance!
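For what it's worth, a minimal SPL sketch of this time-of-day logic; the field and colour names are illustrative, and the handling of the exact 7:05 and 7:45 boundaries is an assumption:

| eval hhmm = tonumber(strftime(_time, "%H%M"))
| eval colour = case(hhmm < 705, "green", hhmm <= 745, "yellow", true(), "red")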
My sample logs:
2022-11-12 04: 12:34, 123 [IMP] [application thread=1:00] - http:com.ap.ddd.group.ll.clentip.DDDLLClientApplication-<overalltimetaken> (100)  11/12/22 5:12 AM to 11/25/23 5:12 AM  4  hr DDDLLClientApplication - Done
2022-11-12 04: 12:34, 123 [IMP] [application thread=1:00] - http:com.ap.ddd.group.ll.clentip.DDDLLClientApplication-<overalltimetaken> (100)  11/12/22 5:12 AM to 11/25/23 5:12 AM  10  hr DDDLLClientApplication - Done
2022-11-12 04: 12:34, 123 [IMP] [application thread=1:00] - http:com.ap.ddd.group.ll.clentip.DDDLLClientApplication-<overalltimetaken> (100)  11/12/22 5:12 AM to 11/25/23 5:12 AM  12  hr DDDLLClientApplication - Done
I want to extract the response time (the 12 hr and 10 hr values shown in the sample logs) and get the info from "DDDLLClientApplication - Done". I want to set up field extractions for response time and info on the sourcetype, and the extraction type should be inline.
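A minimal sketch of inline (props.conf EXTRACT) extractions against the sample lines above; the sourcetype name is hypothetical and the regexes are untested assumptions:

[my_app_sourcetype]
# capture the number in front of "hr" as the response time
EXTRACT-response_time = (?<response_time>\d+)\s+hr
# capture the trailing "DDDLLClientApplication - Done" as info
EXTRACT-info = (?<info>DDDLLClientApplication\s*-\s*\w+)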
Hello, I am perplexed: when I run firebrigade, choose "detail | index detail", and select a host, my index list is incomplete. I see in the source where it gets its input on line 20: "inputlookup fb_hostname_index_cache", and the contents of the fb_hostname_index_cache.csv file are the same incomplete list. I found the periodic search that extracts the data to put into fb_hostname_index_cache.csv; the search is:
index=summary search_name="DB inspection" | dedup orig_host, orig_index | sort orig_host, orig_index | table orig_host,orig_index | outputlookup fb_hostname_index_cache
When I run this search, I get "Error in 'outputlookup' command: The lookup table 'fb_hostname_index_cache' is invalid." When I run the search without "| outputlookup fb_hostname_index_cache", I get an incomplete list of my indexes. So a few things might be happening that I don't know how to determine:
- Splunk doesn't like something about the fb_hostname_index_cache.csv file
- not all of the indexes are being returned from the query
- something isn't right about the contents of index=summary
This issue appeared shortly after I upgraded from 8.0.3 to 8.1.6 to 9.0.0.1. My current system has six dedicated indexers and an independent search head; there are also three heavy forwarders. Can someone shed some light on this? Thank you!
All, I have this search:
index=sro sourcetype=sro-cosmo "DL Cert OK" "Security Posture End of sweep report" | extract pairdelim="\n" kvdelim=":" | rex field=_raw "--ticket \'(?<ticket>.+)\' --summary" | fillnull value=0 | table _time ticket SA_Fail_Total_Count SA_Success_Count SA_Unreachables LP_Firmware_too_old | dedup _time ticket
That results in: But my user wants it in this format: I am using Splunk 8.2.6. Is there any way to format this report so my user does not need to manipulate it in Excel? Thank you, Gerson Garcia
Hello there! I'm trying to ingest JSON data via the Splunk Add-on for Microsoft Cloud Services app. I created a sourcetype with INDEXED_EXTRACTIONS=json and left all other settings at their default values. The data got ingested; however, when I table my events I see multivalue fields with duplicate data. I'm even seeing the "Interesting Fields" percentages add up to 200% (instead of the expected 100%).
(Screenshots: Sourcetype settings, Interesting Fields, MV fields with duplicate data)
https://community.splunk.com/t5/All-Apps-and-Add-ons/JSON-format-Duplicate-value-in-field/m-p/306811
I then followed the advice given in this post ^^^ (i.e., setting KV_MODE=none, AUTO_KV_JSON=false, etc.), but the issue persists. I have attached screenshots to this post to better illustrate my situation. I'm currently on Splunk Cloud. Any help with this is greatly appreciated.
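For reference, a sketch of the settings combination described above (indexed JSON extractions with search-time JSON parsing disabled); the sourcetype name is hypothetical, and note that KV_MODE and AUTO_KV_JSON are search-time settings, so they only take effect where the search runs:

# props.conf - hypothetical sourcetype name
[my_mscs_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false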
  index="main" sourcetype="vrea" | eval nested_payload=mvzip(info, solution, "---") | mvexpand nested_payload | eval info=mvindex(split(nested_payload, "---"), 1) | eval solution=mvindex(split(nest... See more...
  index="main" sourcetype="vrea" | eval nested_payload=mvzip(info, solution, "---") | mvexpand nested_payload | eval info=mvindex(split(nested_payload, "---"), 1) | eval solution=mvindex(split(nested_payload, "---"), 0) | eval nested_payload=mvzip(line, more, "---") | mvexpand nested_payload | eval line=mvindex(split(nested_payload, "---"), 1) | eval more=mvindex(split(nested_payload, "---"), 0) | eval nested_payload=mvzip(ID, Severity, "---") | mvexpand nested_payload | eval Severity=mvindex(split(nested_payload, "---"), 1) | eval CWE_ID=mvindex(split(nested_payload, "---"), 0) | table info solution ID Severity line more     when use the SPL my table fields value first 4 fields keep repeating same value but last 2 field "line and more" correct value?  anyone know why it is happening?
Good afternoon, I'm new to Splunk. I've pulled a copy of the demo software and have a question concerning forwarders: are forwarders required to be installed on each device supplying logs, or can one central forwarder "receive" logs from multiple devices (i.e. Windows, Linux, Cisco switches)? I want to set up a Raspberry Pi to receive logs from a few low-use Windows boxes and Linux boxes, and possibly a switch or two. Thanks in advance, John Bond
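As an illustration of the "central forwarder" idea, a sketch with hypothetical host names, assuming the Windows/Linux boxes run Universal Forwarders that point at the Raspberry Pi, while network devices would typically send syslog instead:

# inputs.conf on the central forwarder: listen for other Splunk forwarders
[splunktcp://9997]
disabled = 0

# outputs.conf on the central forwarder: relay everything on to the indexer
[tcpout:central_indexer]
server = splunk-indexer.example.com:9997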
Hi All. I am trying to calculate the response time from the logs below.
11-12-2019 23:34:45, 678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=100sec
11-12-2019 23:34:45, 678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=10sec
11-12-2019 23:34:45, 678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=50sec
11-12-2019 23:34:45, 678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=40sec
11-12-2019 23:34:45, 678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=130sec

|tstats count where index=xxxx host=abc OR host=cvb OR host=dgf OR host=ujh sourcetype=xxxx by PREFIX(completetime=)
|rename completetime= as Time
|timechart span=1d avg(Time) by host
|eval ResTime =round(,Time2)

When I run this query I am not able to calculate the average of the time, because PREFIX(completetime=) also picks up the "sec" suffix. How can I ignore it?
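A minimal sketch, assuming the raw event text shown above, that captures only the digits of CompleteTime so the "sec" suffix is dropped before averaging (index and host filters omitted):

| rex field=_raw "CompleteTime=(?<CompleteTime>\d+)\s*sec"
| timechart span=1d avg(CompleteTime) by host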