All Posts

I've got a search query which outputs 175 rows. I want it to output only the top 5%. The row count will change over time, so I cannot set a fixed integer value; it needs to be dynamic.
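A dynamic percentage cutoff can be sketched in SPL like this, assuming the results are sorted by whatever field you rank on (the `score` field here is a placeholder, not from the original question):

```spl
| sort - score
| eventstats count AS total
| streamstats count AS rank
| where rank <= ceil(total * 0.05)
| fields - total rank
```

Because `eventstats count` recomputes the full row count on every search, the 5% threshold tracks the result size automatically instead of being hard-coded.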
Sorry, I didn't get you. This is the file flow.

fileA data looks like:
A E F
234 CAR 2
456 BUS 3

fileB data:
A B C
234 MON 3
234 TUES 4
234 WED 5
234 THUR 1
234 FRI 2
234 SAT 1
456 MON 3
456 TUES 4
456 WED 5
456 THUR 1
456 FRI 2
456 SAT 1

The final output should be like:
E A B C
CAR 234 MON 3
CAR 234 TUES 4
CAR 234 WED 5
CAR 234 THUR 1
CAR 234 FRI 2
CAR 234 SAT 1
BUS 456 MON 3
BUS 456 TUES 4
BUS 456 WED 5
BUS 456 THUR 1
BUS 456 FRI 2
BUS 456 SAT 1
Thanks @ITWhisperer it worked
I am using Splunk 9.1.1
Hi @maede_yavari, each Search Head Cluster requires at least three nodes (not two) plus a Deployer, as you can read at https://docs.splunk.com/Documentation/Splunk/9.1.1/DistSearch/SHCarchitecture Ciao. Giuseppe
Check in the Monitoring Console -> Settings -> General Setup whether all roles are assigned correctly. Is distributed mode turned on?
Which version of Splunk are you using? Dashboard Studio is under active development and some improvements have been made in recent releases.
Hello All, I'm receiving a warning from our InfoSec app that my data isn't CIM compliant. We have FortiGate syslogs, Windows Domain Controller Security logs, and Carbon Black Cloud logs being sent to Splunk Cloud. As far as I can tell, the logs being sent are CIM-compliant. Is there anything else I can check? Thanks, Doug
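One way to check coverage, assuming the InfoSec app relies on standard CIM data models such as Authentication, Network_Traffic, or Endpoint (check the app's documentation for the exact models it expects), is to see which sourcetypes actually populate a given model:

```spl
| tstats count FROM datamodel=Authentication BY sourcetype
```

If a sourcetype you expect is missing from the output, its events are likely not being tagged and mapped into that model, even if the individual fields look CIM-compliant.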
| inputlookup fileB.csv | lookup fileA.csv A OUTPUT E | table E A B C
How do I join 2 lookup files to combine all the rows? I used this query but it is not giving proper values, and I also tried join/append with no luck:

| inputlookup fileA table A E F
| join [ inputlookup fileB.csv]
| table E A B C

One file's data looks like:
A E F
234 CAR 2
456 BUS 3

The second file's data:
A B C
234 MON 3
234 TUES 4
234 WED 5
234 THUR 1
234 FRI 2
234 SAT 1
456 MON 3
456 TUES 4
456 WED 5
456 THUR 1
456 FRI 2
456 SAT 1

The final output should be like:
E A B C
CAR 234 MON 3
CAR 234 TUES 4
CAR 234 WED 5
CAR 234 THUR 1
CAR 234 FRI 2
CAR 234 SAT 1
BUS 456 MON 3
BUS 456 TUES 4
BUS 456 WED 5
BUS 456 THUR 1
BUS 456 FRI 2
BUS 456 SAT 1

Thanks in advance..!!
Hi, we have deployed a search head cluster with two search heads and one deployer. When we run:

/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089

we get the following message:

Error when issuing rolling restart on the master: Internal Server Error{"messages":[{"type":"ERROR","text":"Rolling restart cannot be initiated without service_ready_flag = 1, check status through \"splunk show shcluster-status\". Reason :Waiting for 3 peers to register. (Number registered so far: 2)"}]}

And when we run the following command:

splunk show shcluster-status

the initialized_flag equals 0:

Captain:
  dynamic_captain : 1
  elected_captain : Wed Nov 8 13:05:58 2023
  id : 84F622C1-C493-4A7C-B12B-D7EDBBCCD3A8
  initialized_flag : 0
  kvstore_maintenance_status : disabled
  label : sh2
  mgmt_uri : https://sh2.example.com:8089
  min_peers_joined_flag : 0
  rolling_restart_flag : 0
  service_ready_flag : 0
Members:
  sh2
    label : sh2
    mgmt_uri : https://sh2.example.com:8089
    mgmt_uri_alias : https://sh2.example.com:8089
    status : Up
  sh1-server
    label : sh1-server
    last_conf_replication : Pending
    mgmt_uri : https://sh1.example.com:8089
    mgmt_uri_alias : https://sh1.example.com:8089
    status : Up
I have the following search:

index=dsi_splunk host=dev_splunkmanager script=backup
| stats earliest(_time) as debut by pid
| convert timeformat="%d/%m/%Y %H:%M" ctime(debut)

The result is sorted as I need. I use this search as the input for a dropdown in Dashboard Studio, but there the result is sorted differently: it seems the list is sorted by "value". How can I keep the dropdown unsorted so it preserves the result order? Thanks
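One common workaround, sketched here under the assumption that the dropdown sorts its entries lexicographically, is to prefix each label with a zero-padded rank so the lexical order matches the search order (the `label` and `rank` fields are illustrative names, not from the original search):

```spl
index=dsi_splunk host=dev_splunkmanager script=backup
| stats earliest(_time) as debut by pid
| sort debut
| convert timeformat="%d/%m/%Y %H:%M" ctime(debut)
| streamstats count as rank
| eval label = printf("%03d | %s", rank, pid)
| table label pid
```

Point the dropdown's label at `label` and its value at `pid`; the padded prefix keeps the display order identical to the search order.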
|eval output3 = replace(output3,"[\[\]\"]","")
|makemv output3 delim=","
|mvexpand output3
|rename output3 as id
|join id [<second_search>]

If you want to keep the original values of id from the first search, add a temporary field:

|eval temp_id = replace(output3,"[\[\]\"]","")
|makemv temp_id delim=","
|mvexpand temp_id
|rename temp_id as id
|join id [<second_search>]

If you want to combine the results of the second query back together, add this to the end:

| mvcombine id

Keep in mind that join only works with up to 50,000 events, but it doesn't seem like this limitation is relevant to your situation based on the example.

If the second search is a static list of codes that you want to match, you could also put the results of the second query into a lookup table:

|eval temp_id = replace(output3,"[\[\]\"]","")
|makemv temp_id delim=","
|mvexpand temp_id
|rename temp_id as id
|lookup <lookup_name> id OUTPUT id as found
|where isnotnull(found)
|fields - found
| rex "Duration: (?<duration>[\d\.]+) ms" | stats avg(duration) as avg min(duration) as min max(duration) as max
Rather than using json_extract(), try using spath:

| spath input=output1 path="data.affected_items{}.id{}" output=output3
| mvexpand output3
| table output3

The mvexpand will split the output3 field across multiple events.
I am a beginner in Splunk queries. I may be asking for a simple query, but I was not able to construct it after searching a lot. Below is my sample event from the message field:

REPORT RequestId: 288f34e9-5572-4816-d21e-9fcf5965fad0 Duration: 206.64 ms ..

I can get all events matching this criteria, but I want to calculate the average, min, and max of the Duration value in milliseconds. Any help on this would be appreciated.
The only way to reset the situation is to manually edit
"etc/users/user/app/local/eventtypes.conf & tags.conf"
"etc/apps/app/local/eventtypes.conf & tags.conf"
"etc/apps/app/metadata/local.meta"
delete the objects there, and restart splunkd. But if you are inside a cluster, it's not very convenient.
In case @PickleRick's suggestion wasn't clear, you can do this:

| makeresults count=5
| eval n=(random() % 10)
| eval sourcetype="something" . n
| fields - n
| collect index=your_summary_index output_format=hec

It will respect the sourcetype set, in this case a value between something0 and something9.
Hi @richgalloway, can we edit a custom app's props.conf using the conf editor?
The fact that you are using eval is expected, but it does not help identify where the problem is. Please share your data (anonymised where appropriate).