All Posts


Thank you very much, your solution worked perfectly.
@bowesmana  Your suggested solution solved the memory issue. Thank you!!
Going off what you pasted, it comes back as invalid JSON, so I would check that first. But assuming it is just a copy/paste error and you do have a valid JSON object as _raw, then I would probably do an spath like this to retain the associations between url and duration:

index=hello
| spath input=_raw path=details.sub-trans{} output=sub_trans
| fields - _raw
| table sub_trans
| mvexpand sub_trans
| spath input=sub_trans
| fields - sub_trans

You can see here that all the fields are extracted and they maintain their relationship to their individual url/duration according to the structure of the details.sub-trans{} array. It does require an mvexpand, though, so just keep an eye out for memory limits.

Retaining specific associations of the url to its respective duration by extracting both as individual multivalue fields is possible but can be problematic. If any of them has a null entry for whatever reason, then all associations are thrown off from that point on. That's why in these sorts of situations I would much rather extract the entire nested JSON object out of the array, mvexpand that, then spath that internal JSON.

Also want to note that doing an mvexpand against two multivalue fields like in your original search will completely lose all association between which url should have which duration. You will actually end up with N^2 results when, by the structure of the JSON, I believe there should only be N results.
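If you want to try that pipeline without access to index=hello, one way is to mock the event with makeresults; the JSON literal below is a trimmed, hypothetical stand-in for the details.sub-trans{} structure in this thread, so treat it as a sketch rather than the real data:

``` mock a single event whose _raw holds a trimmed version of the JSON from this thread ```
| makeresults
| eval _raw="{\"details\":{\"sub-trans\":[{\"duration\":20,\"req\":{\"url\":\"http://abc123\"}},{\"duration\":44,\"req\":{\"url\":\"https://xyz567\"}}]}}"
``` pull each array element out whole, then expand and parse it ```
| spath input=_raw path=details.sub-trans{} output=sub_trans
| fields - _raw
| mvexpand sub_trans
| spath input=sub_trans
| fields - sub_trans
| rename "req.url" AS url
| table url duration

After the mvexpand, each row holds exactly one element of the array, so url and duration stay paired the whole way through.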
Yes it does. This actually worked. Appreciate it a ton.
@richgalloway  Thank you so much for your quick response. It's not about exporting Splunk search results; it's about writing logs into an S3 bucket using a Splunk TA. For example, we have some application logs on a server, and we would prefer to use a Splunk TA to write those logs into S3 buckets and then ingest the data from S3/SQS. This server has the HF installed on it. We cannot perform direct ingestion from that server due to security reasons. Any thoughts or recommendations?
The duration field populates in my sandbox, but values are duplicated. That's likely because the two mvexpand calls break the association between url and duration. Try this query, instead:

index=hello
| spath output=url details.sub-trans{}.req.url
| spath output=duration details.sub-trans{}.duration
``` Combine url and duration ```
| eval pairs=mvzip(url,duration)
``` Put each pair into a separate event ```
| mvexpand pairs
``` Extract the url and duration fields ```
| eval pairs=split(pairs,","), url=mvindex(pairs,0), duration=mvindex(pairs,1)
| table url,duration
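To try the mvzip/mvindex pairing without the original index, a minimal sketch with mocked multivalue fields (the sample values are made up) might look like this:

| makeresults
| eval url=mvappend("http://abc123","https://xyz567"), duration=mvappend("20","44")
``` mvzip pairs the Nth url with the Nth duration using "," as the delimiter ```
| eval pairs=mvzip(url,duration)
| mvexpand pairs
| eval pairs=split(pairs,","), url=mvindex(pairs,0), duration=mvindex(pairs,1)
| table url duration

Note the pairing only holds if both multivalue fields have the same number of values in the same order, which is the caveat raised elsewhere in this thread.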
Does it have to use an HF? The Export Everything app (https://splunkbase.splunk.com/app/5738) can write to S3.
Hi @Sikha.Singh, Sorry about that. I missed that the doc was for on-prem. This is all I was able to find: https://docs.appdynamics.com/appd/23.x/latest/en/extend-appdynamics/appdynamics-apis/controller-audit-history-api You can also try contacting AppD Support: How do I submit a Support ticket? An FAQ
Hi @Nour.Alghamdi, Thanks for sharing the Doc link. Just to be clear, you found the solution and you no longer need help? 
Thank you also for the same solution and the additional context, this is really helpful.
Thank you for the help, the missing by clause worked (by _time).
Hello, is there any Splunk TA that can write logs from a Splunk server with an HF to AWS S3/SQS? Any recommendation will be highly appreciated, thank you!
Notice that your requested output has more rows than the original input rows, so to do this would require some sort of transformation. One way would be to use an mvexpand method, which would look something like this:

<base_search>
| eval field3=mvappend(field1, field2)
| fields + field3
| mvexpand field3
| sort 0 +field3

You can see in the screenshot that field3 is in your requested format. Full SPL to replicate:

| makeresults count=5
| streamstats count as field1
| eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
| fields - _time
``` mvexpand method ```
| eval field3=mvappend(field1, field2)
| mvexpand field3
| sort 0 +field3

Another method would be append (subsearches can be truncated if you hit any Splunk limits), something like this:

<base_search> field1=*
| eval field3='field1'
| fields + field3
| append [
    | search <base_search> field2=*
    | eval field3='field2'
    | fields + field3 ]

Full SPL to replicate:

| makeresults count=5
| streamstats count as field1
| eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
| fields - _time
| search field1=*
| eval field3='field1'
``` append method ```
| append [
    | makeresults count=5
    | streamstats count as field1
    | eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
    | fields - _time
    | search field2=*
    | eval field3='field2' ]

There is also a slick way of using the appendpipe command to achieve this as well:

<base_search>
| appendpipe [
    | stats values(field2) as field2 ]
| eval field3=coalesce(field1, field2)
| mvexpand field3

The output looks like this. Full SPL to replicate:

| makeresults count=5
| streamstats count as field1
| eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
| fields - _time
``` appendpipe method ```
| appendpipe [
    | stats values(field2) as field2 ]
| eval field3=coalesce(field1, field2)
| mvexpand field3
This returns me the output https://18.135.285.129/11.20.27.83/api/. The first IP is the one that is configured in the asset and the second is the one that I take dynamically through an artifact, so I need it not to use the first one that is configured by default. This is in SOAR.
I have below json and I want table of url and corresponding duration.   { "details": { "sub-trans": [ { "app-trans-id": "123", "sub-trans-id": "234", "startTime": "2024-01-18T12:37:12.482Z", ... See more...
I have the JSON below (pasted as-is) and I want a table of url and corresponding duration.

{
  "details": {
    "sub-trans": [
      {
        "app-trans-id": "123",
        "sub-trans-id": "234",
        "startTime": "2024-01-18T12:37:12.482Z",
        "endTime": "2024-01-18T12:37:12.502Z",
        "duration": 20,
        "req": {
          "url": "http://abc123",
        },
      {
        "app-trans-id": "123",
        "sub-trans-id": "567",
        "startTime": "2024-01-18T12:37:12.506Z",
        "endTime": "2024-01-18T12:37:12.550Z",
        "duration": 44,
        "req": {
          "url": "https://xyz567",
        },
    ]
  }
}

I am using the Splunk query below, but the duration field is not populating in the table. Kindly help:

index=hello
| spath output=url details.sub-trans{}.req.url
| mvexpand url
| spath output=duration details.sub-trans{}.duration
| mvexpand duration
| table url,duration
I got 2 fields from the same Splunk index. field1 has rows 1,2,3,4,5 and field2 has rows 10,12. I want a new field3 with data from both field1 and field2. Please suggest.

field1   field2
1        10
2        12
3
4
5

field3
1
2
3
4
5
10
12
Hi @Strangertinz, you could run a simple search like the following over the last 30 days:

| tstats count WHERE index=* BY index

Ciao. Giuseppe
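If a per-day breakdown is also useful, tstats can split on _time in the same search; a small sketch (the 1-day span is just an example, with the time range picker set to the last 30 days):

| tstats count WHERE index=* BY _time span=1d index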
Hello community, good day. I have a problem and I hope you can help me. I need to configure an asset for the HTTP app so that it makes a GET request. When configuring it in the asset settings tab there is a field called base_url that is mandatory to fill in. The issue is that this base URL is dynamic; I take it from the artifacts through a playbook flow, and each URL is different. So far I have not been able to resolve this. I hope you can help, thanks.
Can you share some anonymised sample events and your expected output?
By the look of the screenshot you shared, it appears that

| stats values(NumberOfAuthErrors) AS NumberOfAuthErrors, values(TotalRequest) AS TotalRequest

is returning two multivalue fields, so the eval is not working as intended. Try running this stats with an additional by-field of _time again; that way NumberOfAuthErrors and TotalRequest should each have only one value per 15-minute interval, and then the eval will probably work.

If for whatever reason you are trying to sum up each row of two multivalue fields (I don't really know why you would want to do this), I would stay away from using stats values(), as it dedups the values and then, I believe, sorts them. Using stats list() instead will retain the original order, but even then, if one of the datasets is missing events in one or more of the 15-minute intervals, the numbers will again be misaligned. You would be better off just using a stats by-field of _time again, something like this:

[SEARCH]
| bin _time span=15m
| stats count as NumberOfAuthErrors by _time
| append [ SEARCH
    | bin _time span=15m
    | stats count as TotalRequest by _time ]
| stats values(NumberOfAuthErrors) AS NumberOfAuthErrors, values(TotalRequest) AS TotalRequest by _time
| eval failureRate = round((NumberOfAuthErrors / TotalRequest) * 100,3)
| table _time, TotalRequest NumberOfAuthErrors failureRate

If you just want the overall failureRate through the entire timespan, then using a stats sum() will probably be the way to go:

[SEARCH]
| bin _time span=15m
| stats count as NumberOfAuthErrors by _time
| append [ SEARCH
    | bin _time span=15m
    | stats count as TotalRequest by _time ]
| stats sum(NumberOfAuthErrors) AS NumberOfAuthErrors, sum(TotalRequest) AS TotalRequest
| eval failureRate = round((NumberOfAuthErrors / TotalRequest) * 100,3)
| table TotalRequest NumberOfAuthErrors failureRate
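If the auth errors and the total requests actually come from the same events, a single search with a conditional count inside stats avoids the append/subsearch limits entirely. A sketch, where the index, sourcetype, and the status==401 test are placeholders for whatever really identifies an auth error in your data:

index=web sourcetype=access
| bin _time span=15m
``` count every request, and separately count only the ones matching the error condition ```
| stats count AS TotalRequest, count(eval(status==401)) AS NumberOfAuthErrors BY _time
| eval failureRate = round((NumberOfAuthErrors / TotalRequest) * 100,3)
| table _time TotalRequest NumberOfAuthErrors failureRate

Because everything is grouped by the same _time bin in one pass, the two counts can never drift out of alignment.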