Hi Sunil, Thanks for your response. I am able to complete the steps from your solution and get the access token, but when I try to use the access token to access other APIs, for example:

curl --user <username>@customer1:<password> -H "Authorization: Bearer <ACCESS TOKEN>" "https://<controller page>/controller/rest/applications"

I am getting the following error:

<html><body><h1>500 Internal Server Error</h1><br/>Exception Id:f08d81b3-42c6-4a1e-a9ad-882dd210bad9<br/></body></html>

I get the same response when I try without the access token as well. I am not sure why this happens.
Hi @bmanikya, to help you with a regex extraction, you should share your events in text mode (ideally using the Insert/Edit Code Sample button), highlighting the parts to extract. Ciao. Giuseppe
Hi, I need to get the count of all fields in some index and then calculate, as a percentage, how many times each occurred out of all events. Hope it's clear. Thank you!
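A hedged sketch of one way to read this: if "count of all fields" means how many events each field appears in, fieldsummary already returns a per-field count. Since built-in fields such as host and sourcetype are present in every event, the largest per-field count can stand in for the total event count (the index name below is a placeholder):

index=your_index
| fieldsummary
| fields field count
| eventstats max(count) AS total_events
| eval percent=round(count/total_events*100, 2)

If you instead want the value distribution within a single field, swap fieldsummary for | stats count BY <your_field> and compute the percentage the same way with eventstats sum(count).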
Hi @yuanliu, I just renamed some fields; here is the exact one. I modified a few things based on your reply.
<input type="checkbox" token="index_scope" searchWhenChanged="true">
<label>Choose console</label>
<choice value="1T*">Standard</choice>
<choice value="2A*">Scada</choice>
<choice value="2S*">AWS</choice>
<default>1T*</default>
<initialValue>1T*</initialValue>
</input>
Here is the search
`compliance($index_scope$, now(), $timerange$, $scope$, $origin$, $country$, $cacp$)`
It's not working as expected with the multiselect; earlier, with a dropdown, it was working well. Thanks!
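One hedged guess at the cause: a dropdown hands the macro a single value, while a multiselect hands it several, so the token needs a prefix, suffix, and delimiter to be shaped into something the macro can consume. Assuming the macro's first argument ends up in an index filter, setting prefix to index IN ( , suffix to ) , and delimiter to , on the multiselect would make two selections expand to something like:

index IN (1T*, 2A*)

Whether this works depends on how compliance() actually uses its first argument; if it builds its own index= expression internally, the delimiter would instead need to be something like " OR ".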
Above is the event; I'm not sure why this is showing up as two different events. Anyway, I have written a Splunk query according to my requirements, but the output is not good. I want to get rid of Service and Maintenance Start time in MST.
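If Service and Maintenance Start time in MST are the literal column names in your final table (an assumption, since the full query isn't shown), a minimal sketch of dropping them would be to end the search with fields, quoting the name that contains spaces:

<your_search>
| fields - Service, "Maintenance Start time in MST"

Alternatively, ending with | table and listing only the columns you want achieves the same result.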
True. But the same app is getting pushed to both indexers and SHs, so your REST query for the transform definition should return the same result regardless of whether it's run against a SH or an indexer.
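A quick way to verify that from the search head, assuming a transform named your_transform_name (a placeholder): the rest command can query every search peer at once, so any drift between SH and indexer definitions shows up side by side:

| rest /services/configs/conf-transforms splunk_server=*
| search title="your_transform_name"
| table splunk_server title REGEX FORMAT DEST_KEY

Identical rows for every splunk_server confirm the app really did land the same transform everywhere.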
You've never scripted on unices, have you? But seriously - that's kinda obvious. I'd say Write-Output is like writing to stdout whereas Write-Host is more like writing to stderr (yes, I know that this analogy is not 100% correct).
Thanks to reddit user u/chadbaldwin, who pointed out that the fault was in using `Write-Host` rather than `Write-Output`: `Write-Host` isn't something Splunk is able to capture. I changed the script to use `Write-Output` and it's now working.
Hello, I have a saved search that pushes data to a summary index. The summary index has data for the last 2 years and the data volume is really huge. Suppose I want to add a new field to this data in the summary index; I need to re-run the search for the last two years. Since the volume is huge, if I try to run the search over all 2 years of data in one go, the search fails or data gets missed. To avoid this, I'll be pushing data in 10-day or 30-day batches. For example, if I have to repopulate my summary index after adding a new field, for the first batch I'll run over data from 1st Aug 2023 to 10th Aug 2023; the next batch I'll run from 11th Aug to 20th Aug, and so on for the past two years of data. This task is very cumbersome. Is there a way to automate it in Splunk? Can I schedule my search in such a way that, while re-pushing the data, it gets pushed into the summary index in 10-day batches without manual intervention?
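Splunk ships a helper for exactly this case: fill_summary_index.py (under $SPLUNK_HOME/bin) re-runs a scheduled saved search over a past time range in chunks and can skip periods that already have summary data. If you'd rather drive it with scheduled ad-hoc searches instead, a minimal sketch of a single 10-day batch might look like this, where the index names and the stats/eval lines are placeholders for your real summary-generating logic:

index=your_index earliest=-10d@d latest=@d
| stats count BY host
| eval new_field="placeholder"
| collect index=your_summary_index

You would then step earliest and latest back 10 days per run; automating that stepping is precisely what the script does, which is why it is usually the less cumbersome route.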
Hello community, I wanted to know whether it's possible, when you have a local Splunk installation, to add new machine "instances" and monitor them alongside the local one. That's all. If it's possible, I'd appreciate a link to the procedure to follow to complete this task. Regards and thanks.
Hi @ITWhisperer! Thanks for your response! It is working, but it always selects two values in dashboard 2 even if we select only one value in dashboard 1. For example, if we select "Front Office" in Dashboard 1, it shows both values, "Front Office" and "Back Office", in Dashboard 2. Thanks!
Hi @bt149, for the lookup population search you could try something like this:

<your_search>
| stats
count
earliest(_time) AS first_event
latest(_time) AS last_event
BY host
| outputlookup your_lookup.csv

For the alert that fires on eventual missing hosts, you could try:

<your_search>
| stats
count
BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats
sum(count) AS count
BY host
| where count=0

Ciao. Giuseppe
I have a lookup file. The lookup has "host", "count", "first_event" and "last_event" fields. I want to run a search hourly that will update all the fields with fresh values and, in the event that a "host" is not found in the search, send an alert. Any guidance would be appreciated.
Hi @john_snow00, sorry, where is the timestamp? If it isn't contained in the event, it's added by Splunk. Anyway, you could run something like this:

<your_search>
| rex "Rate\s+(?<Rate>\d+)\/sec"
| eval MB=Rate/1024/1024
| timechart sum(MB) AS MB

I also added the regex to extract the field (note that it captures the number in "Rate N/sec", so I've named the field Rate); if you already have the field, don't use my regex. Ciao. Giuseppe
Lessons learned: 1) Use btool (or REST in the case of Cloud) to see the effective config. 2) Use a unique naming schema so you don't accidentally clash with settings from other chunks of config.
I have regular traffic passing through my server. The server has the IP 10.41.6.222. My goal is to extract the Rate /sec passing through the server and to be able to see the Rate /sec in a graph, with the x axis showing time and the y axis showing Rate /sec (the extracted values).
-----------------------------------------------------------------------------------------------------------------------------------
Rate 0/sec : Bytes 9815772 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 402/sec : Bytes 9816135 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 139587/sec : Bytes 10004146 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 147636/sec : Bytes 10009645 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10358668 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10361672 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10364579 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10364667 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 49661/sec : Bytes 10371887 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 217793/sec : Bytes 10700517 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 353829/sec : Bytes 10944230 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 93689/sec : Bytes 10946290 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 82030/sec : Bytes 10950753 : from owa client to vs_owa with address
Hi @gcusello, it was the setnull stanza, which was being used by another app and was taking precedence over this one; that is why it was not being taken into consideration. I changed the setnull stanza in transforms.conf to a more meaningful and unique name and that worked. Thanks a lot for your help.
Hi @gcusello, thanks for your reply. I agree with what you suggested. However, I found it challenging to recognize the stanzas that can be used for completing each individual field. I am using Vladiator and filling in gaps that way. From what you said: "Following your example: WinRegMon belongs to the Splunk_TA_Windows Add-On that's CIM 4.x Compliant, so you don't need to perform any action." WinRegMon was there, but I had to discover it myself, as there are lots of options under the default folder's inputs.conf and they are disabled by default, leaving the user to decide which ones to enable and for what purpose. Would you be able to help me identify the best and most appropriate way to decide how to enable the Ports data set's fields (currently not getting any data in; is it coming from Sysmon or some other sourcetype?) from the Endpoint data model? Hope you can understand my challenge.
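A hedged first step before deciding what to enable: check whether anything is populating the Ports dataset today, and from which sourcetypes. Per the CIM documentation, the Ports dataset is driven by events tagged listening and port, which typically come from listening-port/netstat style inputs rather than from Sysmon network events:

| tstats count FROM datamodel=Endpoint.Ports BY sourcetype
| sort - count

If this returns nothing, no onboarded data currently matches the dataset's constraints, and the next step would be enabling an input (and its accompanying tags and eventtypes) that produces listening-port data.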