All Posts



Thanks to both of you!
Hi @ITWhisperer  Below are the raw events, which need to be displayed in table format in a single row; there is no common key value between them.
{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"uniqObjectIds":["275649"],"uniqObjectIdsCount":1,"msg":"unique objectIds","time":"2023-11-03T19:26:43.672Z","v":0}
{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"uniqueRetrievedIds":["275649"],"msg":"data retrieved for Ids","time":"2023-11-06T22:48:03.594Z","v":0}
{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"eventBody":{"objectType":"material","objectIds":["275649","108795","1234567","1234568","99999999","888888888"],"version":"all"},"msg":"request body","time":"2023-11-03T05:25:33.508Z","v":0}
Hi,
I am trying to upload the dSYM files automatically in the pipeline by calling the AppDynamics REST APIs. I would like to know how I can do this using API tokens.
1. I want to generate the token using the AppDynamics REST API. The token generation API requires both an authentication header with username and password and the OAuth request body to successfully request a token. We use only SAML login. Do I need to create a local account for this purpose? Also, how long can the API token live?
2. API Clients (appdynamics.com): when I generate the token via the Admin UI, it shows the maximum is 30 days; after that it needs to be regenerated.
Any comments on this? I appreciate your input.
Thanks,
Viji
After installing the latest UF 9.1.1 on a Linux host, I tried to connect it to the deployment server with:
./splunk set deploy-poll <host name or ip address>:<management port>
I get an error about allowRemoteLogin, and deploymentclient.conf is not created. After I added the following entry to server.conf, the command successfully added the string to connect to the deployment server:
allowRemoteLogin = always
Is anyone else experiencing the same issue?
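For reference, a minimal sketch of the server.conf change that made the command succeed. The [general] stanza name is an assumption based on where this setting normally lives; a restart of the forwarder after editing is typically required:

```
[general]
allowRemoteLogin = always
```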
Since these come from the same raw event(?), you could regather the fields with a stats command:
| stats values(*) as * by _raw
You may need to add _raw to your list of fields in the table command, or use another field which is unique to the original event, e.g. _time.
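As a variation, if the goal is literally one row across events that share no key at all, a hedged sketch (field names taken from the question; this assumes every event matched by the search belongs in the same row) is to drop the by clause so stats collapses everything into a single row:

```
index="" source IN ("") "uniqObjectIds" OR "data retrieved for Ids"
| spath output=uniqObjectIds path=uniqObjectIds{}
| spath output=uniqueRetrievedIds path=uniqueRetrievedIds{}
| stats values(uniqObjectIds) as uniqObjectIds values(uniqueRetrievedIds) as uniqueRetrievedIds
```

Without a by clause, stats returns exactly one row, with each values() cell holding the multivalue union across all matching events.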
Please share the raw unformatted sample event in a code block </> to preserve the original formatting.
I'm not familiar with conf editor.  I recommend making Splunk Cloud config changes locally and then uploading an app.  That means you always have a copy of your configs locally.
This appears to work for me on 9.1.1.  Please can you try cutting down your dashboard to isolate the issue, then post the source code.
Yes, you can.  Please read inputs.conf.spec.
How you sort, group, order, and so on is up to you. "My" part only did the limiting.
Great solution! But there was a typo, and it disregarded the ordering by count. I added a sort to your solution:
<your_search>
| stats count by user
| sort - count
| eventstats count as total
| streamstats count as current
| where current<=0.15*total
Thanks @ITWhisperer, the spath query above worked and I was able to build a table view without duplicates. How can I combine the results of two events into a single row rather than displaying them in two rows? There is no common key to do stats by; they have the same source and index, and only the msg is different.
1. Currently uniqObjectIds and uniqueRetrievedIds are displayed in two rows in the table view; I want them in a single row.
2. How do I combine multiple events in a single query if there is no common key?

index= "" source IN ("") "uniqObjectIds" OR "data retrieved for Ids"
| spath output=uniqObjectIds path=uniqObjectIds{}
| spath output=uniqueRetrievedIds path=uniqueRetrievedIds{}
| eval PST=_time-28800
| eval PST_TIME=strftime(PST, "%Y-%d-%m %H:%M:%S")
| eval split_field=split(_raw, "Z\"}")
| mvexpand split_field
| rex field=split_field "objectIdsCount=(?<objectIdsCount>[^,]+)"
| rex field=split_field "uniqObjectIdsCount=(?<uniqObjectIdsCount>[^,]+)"
| rex field=split_field "recordsCount=(?<recordsCount>[^,]+)"
| rex field=split_field "sqsSentCount=(?<sqsSentCount>[^,]+)"
| table _time, PST_TIME, objectType, objectIdsCount, uniqObjectIdsCount, recordsCount, sqsSentCount, uniqObjectIds, uniqueRetrievedIds
| sort _time desc
<your_search>
| eventstats count as total
| streamstats count as current
| where current<=0.15*total
I know that, but with your solution I can only use integers such as 5, 1, 10, etc. I want to limit the results to a certain percentage of all possible results.
Hi @gcusello @ITWhisperer
I have the same source and index for the two events below.
First event:
{
   awsRequestId:
   hostname:
   level: 30
   msg: data retrieved for Ids
   name:
   pid: 8
   time:
   uniqueRetrievedIds: [
      275649
   ]
   v: 0
}
Second event:
{
   awsRequestId:
   hostname:
   level: 30
   msg: unique objectIds
   name:
   pid: 8
   time:
   uniqObjectIds: [
      275649
   ]
   uniqObjectIdsCount: 1
   v: 0
}
There is no common key in these two events, but I want them combined in a table view:
1. Currently uniqObjectIds and uniqueRetrievedIds are displayed in two rows in the table view; I want them in a single row.
2. How do I combine multiple events in a single query if there is no common key?

index= "" source IN ("") "uniqObjectIds" OR "data retrieved for Ids"
| spath output=uniqObjectIds path=uniqObjectIds{}
| spath output=uniqueRetrievedIds path=uniqueRetrievedIds{}
| eval PST=_time-28800
| eval PST_TIME=strftime(PST, "%Y-%d-%m %H:%M:%S")
| eval split_field=split(_raw, "Z\"}")
| mvexpand split_field
| rex field=split_field "objectIdsCount=(?<objectIdsCount>[^,]+)"
| rex field=split_field "uniqObjectIdsCount=(?<uniqObjectIdsCount>[^,]+)"
| rex field=split_field "recordsCount=(?<recordsCount>[^,]+)"
| rex field=split_field "sqsSentCount=(?<sqsSentCount>[^,]+)"
| table _time, PST_TIME, objectType, objectIdsCount, uniqObjectIdsCount, recordsCount, sqsSentCount, uniqObjectIds, uniqueRetrievedIds
| sort _time desc
You can either use the top command (https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Top):
| top <your_field> limit=<your_choice>
OR you can use sort and then use head:
| sort - count
| head <number_of_choice>
If this is inside a dashboard, you could create a token based on the number of search results and pass it in as the number for the head or top command.
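A sketch of the token idea, assuming a hypothetical dashboard input whose token is named row_limit (the token name is illustrative, not from the thread):

```
<your_search>
| sort - count
| head $row_limit$
```

The dashboard substitutes the token's current value before the search runs, so head receives a plain integer.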
Hey everyone, I currently have a dashboard that has two maps using "| geom geo_us_states featureIdField=State" and one map using cities, for which I have the longitude and latitude. For the cities map, I currently have it in markers mode. Is there a way that when I hover over the cities, or click on them, it displays the count and the field name associated with the count? For example, A=10, B=12, and C=9. For the states map, is there a way to group certain states together to form a region? For example, California, Nevada, and Oregon would be the western region and be colored a certain way. Or is there an app I can download that can help me achieve this? I appreciate all the help!
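For the region question, one possible approach (a sketch, assuming your search already produces a per-state count field named count) is to tag each state with a region via eval/case and aggregate by that tag; whether the choropleth can then color by the region depends on the visualization's settings:

```
| eval Region=case(State IN ("California","Nevada","Oregon"), "West",
                   State IN ("New York","New Jersey"), "East",
                   true(), "Other")
| stats sum(count) as count by Region
```

The region names and state groupings here are illustrative; the true() branch acts as the catch-all default.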
Yes.
1. You have to load the second lookup into your search. You do so by loading the lookup file with the inputlookup command:
| inputlookup fileB.csv
2. A lookup that is inside Splunk can be used to add data onto existing events or table data. To do so, you use the lookup command. You tell Splunk the name of the lookup, which field it shall use to match the data, and which fields to add from the lookup:
| lookup fileA.csv A OUTPUT E
Since field A is present in both fileA and fileB, you can use it to enrich your table with data from the other lookup. You tell Splunk that you want to add data from fileA.csv, that the field present in both datasets is A, and then you tell Splunk to OUTPUT the field E to the current table. This results in a query like the one in my previous answer. When the correct field names and lookup file names are used, this should lead to your desired output.
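Putting the two steps together (file and field names as used in this thread; the final table line is illustrative and should list whatever fields you actually need):

```
| inputlookup fileB.csv
| lookup fileA.csv A OUTPUT E
| table A E
```

inputlookup loads fileB.csv as the base result set, and lookup then matches each row's A value against fileA.csv to append E.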
Add this after your timechart:
| fieldformat AvgReqPerHour=round(AvgReqPerHour,2)
If you don't want rounding, look at floor or ceiling for the behavior you want.
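Note the design difference between the two commands: fieldformat only changes how the value is displayed, while eval replaces the stored value itself. A sketch of both, assuming the AvgReqPerHour field from the answer above:

```
| fieldformat AvgReqPerHour=round(AvgReqPerHour,2)
```

versus, for a stored, floored value that later commands will see:

```
| eval AvgReqPerHour=floor(AvgReqPerHour)
```

Use fieldformat when downstream calculations still need full precision and only the table display should be tidy.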
Hello,
Supposing you have a Search Head in Cloud doing Federated Searches to other Search Heads on-prem, what is the compression ratio (if any)? I have found this useful information about compression between forwarders and indexers, but nothing about compression between Search Heads:
https://community.splunk.com/t5/Getting-Data-In/What-kind-of-compression-is-used-between-forwarders-and-indexers/m-p/103239
https://community.splunk.com/t5/Getting-Data-In/Forwarder-Output-Compression-Ratio-what-is-the-expected/m-p/69899
Splunk Cloud Platform Service Details - Splunk Documentation
Thanks a lot,
Edoardo