All Posts

Does it have to use an HF? The Export Everything app (https://splunkbase.splunk.com/app/5738) can write to S3.
Hi @Sikha.Singh, sorry about that. I missed that the doc was for on-prem. This is all I was able to find: https://docs.appdynamics.com/appd/23.x/latest/en/extend-appdynamics/appdynamics-apis/controller-audit-history-api You can also try contacting AppD Support: How do I submit a Support ticket? An FAQ
Hi @Nour.Alghamdi, Thanks for sharing the Doc link. Just to be clear, you found the solution and you no longer need help? 
Thank you also for the same solution and the additional context, this is really helpful.
Thank you for the help, the missing by clause worked (by _time).
Hello, do we have any Splunk TA that can write logs from a Splunk server with an HF to AWS S3/SQS? Any recommendation will be highly appreciated, thank you!
Notice that your requested output has more rows than the original input rows. To do this would require some sort of transformation; one way could be to use an mvexpand method, which would look something like this:

<base_search>
| eval field3=mvappend(field1, field2)
| fields + field3
| mvexpand field3
| sort 0 +field3

You can see in the screenshot that field3 is in your requested format.

Full SPL to replicate:

| makeresults count=5
| streamstats count as field1
| eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
| fields - _time
``` mvexpand method ```
| eval field3=mvappend(field1, field2)
| mvexpand field3
| sort 0 +field3

Another method would be append (subsearches can be truncated if you hit any Splunk limits), something like this:

<base_search> field1=*
| eval field3='field1'
| fields + field3
| append [
    | search <base_search> field2=*
    | eval field3='field2'
    | fields + field3
]

Full SPL to replicate:

| makeresults count=5
| streamstats count as field1
| eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
| fields - _time
| search field1=*
| eval field3='field1'
``` append method ```
| append [
    | makeresults count=5
    | streamstats count as field1
    | eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
    | fields - _time
    | search field2=*
    | eval field3='field2'
]

I bet there is also a slick way of using the appendpipe command to achieve this as well:

<base_search>
| appendpipe [
    | stats values(field2) as field2
]
| eval field3=coalesce(field1, field2)
| mvexpand field3

The output looks like this.

Full SPL to replicate:

| makeresults count=5
| streamstats count as field1
| eval field2=case( 'field1'==1, 10, 'field1'==2, 12, True(), null() )
| fields - _time
``` appendpipe method ```
| appendpipe [
    | stats values(field2) as field2
]
| eval field3=coalesce(field1, field2)
| mvexpand field3
This returns me the output https://18.135.285.129/11.20.27.83/api/. The first IP is the one that is configured in the asset and the second is the one that I take dynamically through an artifact, so I need it to not use the first one that is configured by default. This is in SOAR.
I have the below JSON and I want a table of url and corresponding duration.

{
  "details": {
    "sub-trans": [
      {
        "app-trans-id": "123",
        "sub-trans-id": "234",
        "startTime": "2024-01-18T12:37:12.482Z",
        "endTime": "2024-01-18T12:37:12.502Z",
        "duration": 20,
        "req": {
          "url": "http://abc123"
        }
      },
      {
        "app-trans-id": "123",
        "sub-trans-id": "567",
        "startTime": "2024-01-18T12:37:12.506Z",
        "endTime": "2024-01-18T12:37:12.550Z",
        "duration": 44,
        "req": {
          "url": "https://xyz567"
        }
      }
    ]
  }
}

I am using the below Splunk query but the duration field is not populating in the table. Kindly help.

index=hello
| spath output=url details.sub-trans{}.req.url
| mvexpand url
| spath output=duration details.sub-trans{}.duration
| mvexpand duration
| table url, duration
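Whatever is causing the empty duration column, extracting url and duration with separate spath/mvexpand passes will also cross-multiply the rows and break the pairing between the two fields. A minimal sketch of one alternative, assuming the JSON structure above, is to expand each sub-trans{} entry into its own row first and then pull both fields out of that entry:

index=hello
``` one row per sub-trans entry, then extract fields from that entry ```
| spath output=subtrans path=details.sub-trans{}
| mvexpand subtrans
| spath input=subtrans output=url path=req.url
| spath input=subtrans output=duration path=duration
| table url, duration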
I got 2 fields from the same Splunk index. field1 has rows 1,2,3,4,5 and field2 has rows 10,12. I want a new field3 with data from both field1 and field2. Please suggest.

field1   field2
1        10
2        12
3
4
5

field3
1
2
3
4
5
10
12
Hi @Strangertinz, you could run a simple search like the following over the last 30 days:

| tstats count WHERE index=* BY index

Ciao. Giuseppe
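If the 30-day window needs to live in the SPL itself rather than in the time range picker, a sketch would be the same search with an explicit earliest in the WHERE clause:

| tstats count WHERE index=* earliest=-30d@d BY index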
Hello community, good day. I have a problem and I hope you can help me. I need to configure an asset of the HTTP app so that it makes a GET request. When configuring it in the asset settings tab there is a field called base_url that is mandatory to fill in. The issue is that this base URL is dynamic: I am going to take it from the artifacts through a flow, and each URL is different. So far I have not been able to solve it. I hope you can help, thanks.
Can you share some anonymised sample events and your expected output?
By the look of the screenshot you shared, it appears that

| stats values(NumberOfAuthErrors) AS NumberOfAuthErrors, values(TotalRequest) AS TotalRequest

is returning you two multivalue fields, so the eval is not working as intended. Try putting this stats with an additional by-field of _time again; that way NumberOfAuthErrors and TotalRequest should each have only 1 value for each 15 minute interval and then the eval will probably work.

If for whatever reason you are trying to sum up each row of two multivalue fields (I don't really know why you would want to do this), I would stay away from using stats values(), as this is going to dedup the values and then, I believe, sort them. Using stats list() instead will retain the original order, but even then, if one of the datasets is missing events in one or more of the 15 minute intervals, the numbers will again be misaligned. You would be better off just using a stats by-field of _time again, something like this:

[SEARCH]
| bin _time span=15m
| stats count as NumberOfAuthErrors by _time
| append [
    SEARCH
    | bin _time span=15m
    | stats count as TotalRequest by _time
]
| stats values(NumberOfAuthErrors) AS NumberOfAuthErrors, values(TotalRequest) AS TotalRequest by _time
| eval failureRate = round((NumberOfAuthErrors / TotalRequest) * 100,3)
| table _time, TotalRequest NumberOfAuthErrors failureRate

If you just want the overall failureRate through the entire timespan, then using stats sum() will probably be the way to go:

[SEARCH]
| bin _time span=15m
| stats count as NumberOfAuthErrors by _time
| append [
    SEARCH
    | bin _time span=15m
    | stats count as TotalRequest by _time
]
| stats sum(NumberOfAuthErrors) AS NumberOfAuthErrors, sum(TotalRequest) AS TotalRequest
| eval failureRate = round((NumberOfAuthErrors / TotalRequest) * 100,3)
| table TotalRequest NumberOfAuthErrors failureRate
Try including by _time on this line:

| stats values(NumberOfAuthErrors) AS NumberOfAuthErrors, values(TotalRequest) AS TotalRequest by _time
Probably a few ways of doing this, but if you have access to index=_internal you can try something like this:

index=_internal component=Metrics group=per_index_thruput earliest=-30d@d latest=now
| bucket span=1h _time
| stats sum(kb) as hourly_kb, sum(ev) as hourly_events by _time, series
| stats earliest(_time) as earliest_event, latest(_time) as latest_event, count as sample_size, avg(hourly_kb) as avg_hourly_kb, sum(hourly_kb) as total_kb, avg(hourly_events) as avg_hourly_events, sum(hourly_events) as total_events by series
| convert ctime(earliest_event), ctime(latest_event)
| rename series as index
I want the base_url of the asset of the HTTP app to be dynamic, filled with the information that I take from the source (artifact) through a flow. How would I replace the one that is inserted in the asset by default? This is in the HTTP app with the Get Data method.
Hi All, I'm trying to calculate the failureRate as a percentage between the NumberOfAuthErrors column and the TotalRequest column, but I do not get any values. I do have two columns of values. I would like to calculate the failureRate for each ROW.

[SEARCH]
| bin _time span=15m
| stats count as NumberOfAuthErrors by _time
| append [
    SEARCH
    | bin _time span=15m
    | stats count as TotalRequest by _time
]
| stats values(NumberOfAuthErrors) AS NumberOfAuthErrors, values(TotalRequest) AS TotalRequest
| eval failureRate = round((NumberOfAuthErrors / TotalRequest) * 100,3)
| table TotalRequest NumberOfAuthErrors failureRate

Thanks
Hi, I am looking for a search to list out all of the indexes in Splunk. I know how to get the full list, but I'm looking for a clear way to get a list of the ones being used that have actively received data within the last 30 days. Thanks in advance!
Thank you mate for the help. The corrected version below gave faster results:

| chart latest(Count) as Count by ProcessDate,Name
| sort 0 - ProcessDate
| transpose 0 column_name=Name header_field=ProcDate