All Posts
You can either use the top command (https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Top)

| top <your_field> limit=<your_choice>

or you can use sort and then head:

| sort - count
| head <number_of_choice>

If this is inside a dashboard, you could create a token based on the number of search results and pass it as the number for the head or top command.
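If the cut-off is a percentage of the row count rather than a fixed number, here is a minimal sketch of a dynamic limit (assuming the results carry a count field as above; streamstats and eventstats are standard SPL):

| sort - count
| streamstats count as rank
| eventstats count as total
| where rank <= ceil(total * 0.05)
| fields - rank total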
Hey everyone, I currently have a dashboard that has two maps using "| geom geo_us_states featureIdField=State" and one map using cities, for which I have the longitude and latitude. For the cities map, I currently have it in markers mode. Is there a way that when I hover over the cities, or click on them, it can display the count and the field name associated with the count? For example, A = 10, B = 12, and C = 9. For the states map, is there a way to group certain states together to form a region? For example, California, Nevada, and Oregon form the western region and are colored a certain way. Or is there an app I can download that can help me achieve this? I appreciate all the help!
Yes.

1. You have to load the second lookup into your search. You do so by loading the lookup file with the inputlookup command:

| inputlookup fileB.csv

2. A lookup that is inside Splunk can be used to add data onto existing events or table data. To do so, you have to use the lookup command. You tell Splunk the name of the lookup, which field it shall use to match the data, and which fields to add from the lookup:

| lookup fileA.csv A OUTPUT E

Since field A is in both fileA and fileB, you can use it to enrich your table with data from the other lookup. You tell Splunk that you want to add data from fileA.csv, that the field present in both datasets is A, and that it should OUTPUT the field E to the current table. This results in a query like the one in my previous answer; a complete sketch follows below. When the correct field names and lookup file names are used, this should lead to your desired output.
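Putting both steps together (file and field names taken from this thread; the final table command is an assumption to match the requested column order):

| inputlookup fileB.csv
| lookup fileA.csv A OUTPUT E
| table E A B C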
Add this after your timechart:

| fieldformat AvgReqPerHour = round(AvgReqPerHour, 2)

If you don't want rounding, look at floor or ceiling for the behavior you want.
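For context, fieldformat only changes how the value is displayed; the underlying value keeps full precision for any later calculation. A minimal sketch, assuming an hourly average over a hypothetical requests field (only AvgReqPerHour comes from this thread):

| timechart span=1h avg(requests) as AvgReqPerHour
| fieldformat AvgReqPerHour = round(AvgReqPerHour, 2)

Swap fieldformat for eval if you want the rounded value stored in the field itself.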
Hello, supposing you have a Search Head in Cloud doing Federated Searches to other Search Heads on-prem, what is the compression ratio (if any)? I have found useful information about compression between forwarders and indexers, but not between Search Heads.

https://community.splunk.com/t5/Getting-Data-In/What-kind-of-compression-is-used-between-forwarders-and-indexers/m-p/103239
https://community.splunk.com/t5/Getting-Data-In/Forwarder-Output-Compression-Ratio-what-is-the-expected/m-p/69899
Splunk Cloud Platform Service Details - Splunk Documentation

Thanks a lot, Edoardo
I've got a search query which outputs 175 rows. I want it to output only the top 5%. The row count will change over time, so I cannot set a fixed integer value; it needs to be dynamic.
Sorry, I didn't get you. This is the file flow.

fileA data looks like:

A   E   F
234 CAR 2
456 BUS 3

fileB data:

A   B    C
234 MON  3
234 TUES 4
234 WED  5
234 THUR 1
234 FRI  2
234 SAT  1
456 MON  3
456 TUES 4
456 WED  5
456 THUR 1
456 FRI  2
456 SAT  1

The final output should be like:

E   A   B    C
CAR 234 MON  3
CAR 234 TUES 4
CAR 234 WED  5
CAR 234 THUR 1
CAR 234 FRI  2
CAR 234 SAT  1
BUS 456 MON  3
BUS 456 TUES 4
BUS 456 WED  5
BUS 456 THUR 1
BUS 456 FRI  2
BUS 456 SAT  1
Thanks @ITWhisperer, it worked.
I am using Splunk 9.1.1
Hi @maede_yavari, each Search Head Cluster requires at least three nodes, not two, plus a Deployer, as you can read at https://docs.splunk.com/Documentation/Splunk/9.1.1/DistSearch/SHCarchitecture

Ciao. Giuseppe
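For reference, a sketch of bringing a third member into the cluster (hostname, ports, label, and secret are placeholders; init shcluster-config and add shcluster-member are the standard CLI commands).

On the new search head:

splunk init shcluster-config -mgmt_uri https://sh3.example.com:8089 -replication_port 9777 -conf_deploy_fetch_url https://deployer.example.com:8089 -secret <shcluster_secret> -shcluster_label shcluster1
splunk restart

Then, from an existing member:

splunk add shcluster-member -new_member_uri https://sh3.example.com:8089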
Check in the Monitoring Console -> Settings -> General Setup whether all roles are assigned correctly. Is distributed mode turned on?
Which version of Splunk are you using? Studio is under active development and some improvements have been made.
Hello All, I'm receiving a warning from our InfoSec app that my data isn't CIM compliant. We have FortiGate syslogs, Windows Domain Controller Security logs, and Carbon Black Cloud logs being sent to Splunk Cloud. As far as I can tell, the logs being sent are CIM compliant. Is there anything else I can check? Thanks, Doug
| inputlookup fileB.csv
| lookup fileA.csv A OUTPUT E
| table E A B C
How to join 2 lookup files to combine all the rows? I used this query, but it is not giving proper values, and using join/append was no use.

| inputlookup fileA
| table A E F
| join [ inputlookup fileB.csv ]
| table E A B C

One file's data looks like:

A   E   F
234 CAR 2
456 BUS 3

Second file's data:

A   B    C
234 MON  3
234 TUES 4
234 WED  5
234 THUR 1
234 FRI  2
234 SAT  1
456 MON  3
456 TUES 4
456 WED  5
456 THUR 1
456 FRI  2
456 SAT  1

The final output should be like:

E   A   B    C
CAR 234 MON  3
CAR 234 TUES 4
CAR 234 WED  5
CAR 234 THUR 1
CAR 234 FRI  2
CAR 234 SAT  1
BUS 456 MON  3
BUS 456 TUES 4
BUS 456 WED  5
BUS 456 THUR 1
BUS 456 FRI  2
BUS 456 SAT  1

Thanks in advance!
Hi, we have deployed a search head cluster with two search heads and one deployer. When we run:

/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089

we get the following message:

Error when issuing rolling restart on the master: Internal Server Error{"messages":[{"type":"ERROR","text":"Rolling restart cannot be initiated without service_ready_flag = 1, check status through \"splunk show shcluster-status\". Reason :Waiting for 3 peers to register. (Number registered so far: 2)"}]}

And when we run the following command:

splunk show shcluster-status

the initialized_flag equals 0.

Captain:
    dynamic_captain : 1
    elected_captain : Wed Nov 8 13:05:58 2023
    id : 84F622C1-C493-4A7C-B12B-D7EDBBCCD3A8
    initialized_flag : 0
    kvstore_maintenance_status : disabled
    label : sh2
    mgmt_uri : https://sh2.example.com:8089
    min_peers_joined_flag : 0
    rolling_restart_flag : 0
    service_ready_flag : 0

Members:
    sh2
        label : sh2
        mgmt_uri : https://sh2.example.com:8089
        mgmt_uri_alias : https://sh2.example.com:8089
        status : Up
    sh1-server
        label : sh1-server
        last_conf_replication : Pending
        mgmt_uri : https://sh1.example.com:8089
        mgmt_uri_alias : https://sh1.example.com:8089
        status : Up
I have the following search:

index=dsi_splunk host=dev_splunkmanager script=backup
| stats earliest(_time) as debut by pid
| convert timeformat="%d/%m/%Y %H:%M" ctime(debut)

The result is sorted as I need. I use this search as input for a dropdown in Dashboard Studio, but the result is sorted differently; it seems the list is sorted by "value". How can I keep the dropdown unsorted so it preserves the result order? Thanks
| eval output3 = replace(output3,"[\[\]\"]","")
| makemv output3 delim=","
| mvexpand output3
| rename output3 as id
| join id [<second_search>]

If you want to keep the original values of id from the first search, add a temporary field:

| eval temp_id = replace(output3,"[\[\]\"]","")
| makemv temp_id delim=","
| mvexpand temp_id
| rename temp_id as id
| join id [<second_search>]

If you want to combine the results of the second query back together, add this to the end:

| mvcombine id

Keep in mind that join only works with up to 50,000 events, but it doesn't seem like this limitation is relevant to your situation based on the example.

If the second search is a static list of codes that you want to match, you could also put the results of the second query into a lookup table:

| eval temp_id = replace(output3,"[\[\]\"]","")
| makemv temp_id delim=","
| mvexpand temp_id
| rename temp_id as id
| lookup <lookup_name> id OUTPUT id as found
| where isnotnull(found)
| fields - found
| rex "Duration: (?<duration>[\d\.]+) ms" | stats avg(duration) as avg min(duration) as min max(duration) as max
Rather than using json_extract(), try using spath:

| spath input=output1 path="data.affected_items{}.id{}" output=output3
| mvexpand output3
| table output3

The mvexpand will split the output3 field across multiple events.