All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am new to working without Splunk agents/universal forwarders for ingesting data into Splunk. I need to know how an application can send data directly to a Splunk indexer or heavy forwarder (HF), and whether there are exact steps documented. Would it be via HEC or via a TCP port? And how would users set this up so the application continuously sends data? Thanks!
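A common pattern for agentless ingestion is the HTTP Event Collector (HEC): enable HEC and create a token on the indexer or HF, then have the application POST events to it over HTTPS. A minimal sketch (the hostname, port, and token value below are placeholders):

```
# inputs.conf on the indexer/HF (or enable via Settings > Data Inputs > HTTP Event Collector):
[http]
disabled = 0

[http://my_app_token]
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = my_app

# The application then sends events continuously, one POST per event or batch:
curl -k https://splunk-hf.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "hello from my app", "sourcetype": "my_app"}'
```

In practice most applications use a logging library with an HEC handler rather than raw HTTP calls; a plain TCP input ([tcp://<port>] in inputs.conf) is the alternative when the application can only write to a socket.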
Hi, I wanted to write a search that shows all hosts that first sent data to Splunk within the last 24 hours. I also want to see which index these hosts deliver to. For the first part I wrote the following search:

| metadata type=hosts index=_* OR index=*
| where firstTime >= relative_time(now(), "-24h")
| convert timeformat="%Y-%m-%d %T" ctime(firstTime) as firstTime, ctime(lastTime) as lastTime, ctime(recentTime) as recentTime
| search host!="*_*"
| table host, firstTime, recentTime
| join [| tstats latest(_time) as firsttwoTime where (index=* OR index=_*) by host, index | table index, host]
| table index, host, firstTime, recentTime

The problem is that this search is really slow. Is there another search that would be more efficient?
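One way to avoid the join entirely is to let tstats return both the index and the earliest time per host in a single pass. A sketch of that idea:

```
| tstats min(_time) as firstTime latest(_time) as recentTime where index=* OR index=_* by host index
| where firstTime >= relative_time(now(), "-24h")
| convert timeformat="%Y-%m-%d %T" ctime(firstTime) as firstTime, ctime(recentTime) as recentTime
| search host!="*_*"
| table index, host, firstTime, recentTime
```

Note one difference: tstats min(_time) reflects the earliest event still searchable in the chosen time range, not the all-time first arrival that | metadata reports, so run it over a window longer than 24 hours if that distinction matters.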
Hello Splunkers. I performed a search using two indexes, but these two indexes have different fields that use the same field name, for example:
EmailServer: has the field name message_subject
EmailProxy: has the field name message_subject

I want to search using the message_subject from EmailServer only:

index=EmailServer OR index=EmailProxy NOT (src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16 OR src_ip=172.16.0.0/12)
| table src_ip sender EmailServer.message_subject

Thanks ^_^
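Field names are not namespaced by index in SPL, so EmailServer.message_subject will not resolve. One approach is to keep the value only for events from the desired index (a sketch, assuming the index names match exactly as written):

```
index=EmailServer OR index=EmailProxy NOT (src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16 OR src_ip=172.16.0.0/12)
| eval server_subject=if(index=="EmailServer", message_subject, null())
| table src_ip sender server_subject
```

Every event carries an index field, so the eval can distinguish which source a given message_subject value came from.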
I tried to install the Splunk universal forwarder, but I used the wrong credentials. My plan is to uninstall it and do it the right way once again. Does anyone know a better way to correct the credentials?
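A full reinstall is usually unnecessary just to fix the admin credentials. On recent forwarder versions you can reset the local admin account by seeding a new password. A sketch, assuming a default Linux install path:

```
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

# Create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
#   [user_info]
#   USERNAME = admin
#   PASSWORD = <new password>

$SPLUNK_HOME/bin/splunk start
```

On restart the forwarder picks up user-seed.conf and recreates the admin account with the seeded password.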
Hello, I have an Excel file like this (screenshot omitted), and I want to do the same thing in Splunk, but I can't / don't know how to do it. My search:

index=centreon host="xxxxxx"
| bucket _time span=1d
| convert ctime(_time) AS date timeformat="%Y/%m/%d"
| contingency host date usetotal=false
| appendcols [search index=centreon host="xxxx"
    | bucket _time span=1d
    | convert ctime(_time) AS date timeformat="%Y/%m/%d"
    | stats avg(_raw) AS AVG by host
    | stats stdev(_raw) AS STDEV by host
    | eval ratio=(stdev/avg)
    | fields avg,stdev,ratio]

I'm sure my search is wrong; could someone help me? Thanks
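A subsearch-free sketch of the same idea: compute the daily counts first, then attach the average and standard deviation of those counts with eventstats (field names here are illustrative, and this assumes a per-day event count is the value being averaged):

```
index=centreon host="xxxxxx"
| bucket _time span=1d
| stats count by _time host
| eventstats avg(count) as AVG stdev(count) as STDEV by host
| eval ratio=STDEV/AVG
| convert timeformat="%Y/%m/%d" ctime(_time) AS date
| table host date count AVG STDEV ratio
```

Note that avg(_raw) in the original is almost certainly not what was intended: _raw is the full event text, not a numeric field, so stats cannot average it.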
Hello Splunkers!

I want to search across two indexes with one common value between them, for example:

index=Exchange_server has the following fields: sender, subject
index=EmailProxy has the following fields: src_ip, sender

where the sender value is the same in the two indexes.

I want the output to include: src_ip, SenderMail, Subject.

Here's my search:

index=Exchange_server OR index=EmailProxy
| table src_ip message_subjec sender

But unfortunately I get many blank fields. Please help me with it. Thanks ^_^
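Instead of OR-ing the indexes and tabling fields that only exist in one of them, a common pattern is to group by the shared field and collect the others. A sketch, using the field names as described (src_ip only in EmailProxy, subject only in Exchange_server):

```
index=Exchange_server OR index=EmailProxy
| stats values(src_ip) as src_ip values(subject) as Subject by sender
| where isnotnull(src_ip) AND isnotnull(Subject)
```

The final where keeps only senders seen in both indexes, which removes the blank rows; the blanks in the original search appear because each event carries only the fields its own index provides.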
I am currently using | streamstats count as index by success_rate, but it doesn't work.

What I want:
success_rate/Index
pass/0
pass/1
pass/2
fail/0
fail/1
pass/0
pass/1

What I am getting:
success_rate/Index
pass/0
pass/1
pass/2
fail/0
fail/1
pass/3
pass/4

As can be seen above, when "pass" occurs again, the counter continues from the previous run of passes. Please help, thank you!
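streamstats can restart its counter whenever the value of the by-field changes, which matches the desired output. A sketch (count starts at 1, so subtract 1 for a zero-based index):

```
| streamstats reset_on_change=true count as index by success_rate
| eval index=index-1
```

Without reset_on_change, the by clause keeps one running counter per distinct success_rate value across the whole result set, which is why the second run of "pass" continued from the previous pass count.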
I've read in other posts that using join in Splunk isn't great, so I'm looking for a better way to do my search. I want a table of users connected to the company VPN who are not using a corporate device and who are not contractors. The first join finds non-corporate devices and the second join finds users who are not contractors. Currently the search looks something like this:

index=firewall vpn_connection=success
| dedup device_name
| table device_name, user, src_ip
| join type=outer left=vpn right=AD where vpn.device_name=AD.name
    [| inputlookup AD_Computer_LDAP_list | table name]
| where isnull('AD.name')
| table vpn.device_name, vpn.user, vpn.src_IP
| rename vpn.user as user, vpn.device as device
| join type=left left=connected right=contractor where connected.user=contractor.user
    [| inputlookup AD_User_LDAP_list | where like(memberOf, "%contractor%") | eval user=lower(sAMAccountName) | table user]
| where isnull('contractor.user')
| table connected.device, connected.user, connected.src_IP

Any way to avoid the joins and simplify this would be much appreciated!
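The lookup command can often replace both joins: it enriches each event in place, and a null output field means "no match in the lookup". A sketch, assuming both CSVs are also defined as lookup definitions and that the field names line up as shown:

```
index=firewall vpn_connection=success
| dedup device_name
| lookup AD_Computer_LDAP_list name as device_name OUTPUT name as ad_name
| where isnull(ad_name)
| eval user=lower(user)
| lookup AD_User_LDAP_list sAMAccountName as user OUTPUT memberOf
| where isnull(mvfilter(like(memberOf, "%contractor%")))
| table device_name, user, src_ip
```

The memberOf test may need adjusting depending on whether that field is multivalued in the lookup, but the general shape avoids join and its subsearch limits entirely.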
WSS input is unresponsive.
A) Getting socket errors when connecting to localhost scwss-poll.
B) Submitting the input XML form with input name/credentials to the API is not working; splunkd throws an error as unresponsive.
Hi, I'm seeing strange behavior with an automatic lookup (the same happens with a manual lookup). I have a CSV file that contains codes, for example:
1 - LOGIN
2 - FAILURE
11 - CERTIFICATE
12 - SOMETHING
...

I have the lookup:

LOOKUP-event_code_action_lookup = event_code_action_lookup event_code AS EventCode OUTPUT event_code_action AS EventCodeAction

When I get results, every EventCodeAction appears doubled: for each event, the EventCodeAction count is x2. Why is EventCodeAction doubled in the results? The original data is JSON, parsed with:

INDEXED_EXTRACTIONS = json
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = 0
TIMESTAMP_FIELDS = @timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
pulldown_type = 1

Also, is it possible to add EventCodeAction to the original JSON as a field, so it's not only visible on the left side where the fields are?
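A likely cause, given the props shown: with both INDEXED_EXTRACTIONS = json and KV_MODE = json, every field is extracted twice (once at index time, once at search time), so EventCode becomes multivalued and the lookup output appears doubled. The usual fix is to disable search-time JSON extraction for that sourcetype on the search head. A props.conf sketch (the stanza name is a placeholder for your sourcetype):

```
[your_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false
```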
Splunk is asking me to connect my MacBook Pro, which I can no longer connect after reformatting the machine and forgetting the address and password???
Dear fellows, I have two logs and I am looking to do some correlation between them. In log1, I am looking for IP_x (e.g. 2.2.2.2) associated with IP_1 (1.1.1.1), then I reuse the value of IP_x (2.2.2.2) in another search. When I execute it, I get nothing:

index=* sourcetype=log1
    [search index=* sourcetype=log2 src_ip="1.1.1.1"
    | rex field=_raw "src-ip (?<src-ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    | eval src_ip = src-ip
    | table src_ip] src_ip

If I do this, I get the details:

index=* sourcetype=log1 2.2.2.2

When I execute the subsearch manually, I get the table with 2.2.2.2:

search index=* sourcetype=log2 src_ip="1.1.1.1"
| rex field=_raw "src-ip (?<src-ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| eval src_ip = src-ip
| table src_ip

Any help will be welcome.
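The likely culprit is the hyphen: in eval src_ip = src-ip, SPL parses src-ip as src minus ip, so src_ip ends up empty inside the subsearch. A field name containing a hyphen must be single-quoted on the right-hand side of eval, but a simpler sketch is to capture straight into src_ip and skip the rename entirely:

```
index=* sourcetype=log1
    [search index=* sourcetype=log2 src_ip="1.1.1.1"
    | rex field=_raw "src-ip (?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    | table src_ip]
```

The subsearch then returns src_ip="2.2.2.2", which becomes the filter for the outer search.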
Hi All, I have a dashboard that loads a loadjob. I would like to add a button to stop loading data. Is it possible to do this? Simone
Hi, I have a list of values as shown below (screenshot omitted). From the data in the picture, I want to pick the average of each column's values and also the latest value of that column. I can successfully take the average value but am unable to print the latest value. Please help.

index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage" url="*.bundle"
| chart avg(objectSize) as average, tail 1 as latest by url

I used "tail 1" to get the latest value, but the query gives an error when executing it.

Thanks,
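stats (and chart) have a latest() aggregator that does what "tail 1" was reaching for. A sketch:

```
index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage" url="*.bundle"
| stats avg(objectSize) as average latest(objectSize) as latest by url
```

latest() picks the value from the most recent event per url. The original query errors because tail is a standalone command, not an aggregation function, so it cannot appear inside chart.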
Team, we are using Splunk_TA_microsoft-cloudservices V4.0.1 for reading logs from blob storage. The app works fine and reads the data from the blob when we store the files under the container without subdirectories. But when we store the files under multiple subdirectories, the app fails to load the files from the exact location. This is because the blob path is not encoded, so it is not able to find and load the respective files. Could you please fix this error in "mscs_storage_blob_dispatcher.py" and do a quick release?

Example: resourceId=/SUBSCRIPTIONS/7736B8F1-6136-4ECB-89B5-3DDF8C225441/RESOURCEGROUPS/ABAKASG-WAF-AAC/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/ABAKASG-GW-AAC/y=2021/m=05/d=25/h=01/m=00/PT1H.json'
Fix: resourceId%3D/SUBSCRIPTIONS/7736B8F1-6136-4ECB-89B5-3DDF8C225441/RESOURCEGROUPS/ABAKASG-WAF-AAC/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/ABAKASG-GW-AAC/y%3D2021/m%3D05/d%3D26/h%3D12/m%3D00/PT1H.json

Thanks, Karthik
Hi, I'm a bit out of my depth here, but I have written an eval that divides the events in the index by their URLs into 4 categories. Each category has a different response_time threshold, and the search ultimately will calculate how many events in each category fall into the acceptable range of that threshold. How do I do this? I thought of maybe doing a total count by category, but then I have no idea how to test, within the same search, whether the events in each category fall within that category's threshold. Do I need subsearches?
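No subsearches needed: assign each event its category threshold with case(), then count matches and totals in a single stats pass. A sketch (the index name, category patterns, and threshold values are placeholders):

```
index=your_index
| eval category=case(match(url, "/api/"), "api", match(url, "/static/"), "static", true(), "other")
| eval threshold=case(category=="api", 200, category=="static", 100, true(), 500)
| stats count as total count(eval(response_time<=threshold)) as within_threshold by category
| eval pct_within=round(100*within_threshold/total, 1)
```

count(eval(...)) counts only the events where the condition holds, so each category is compared against its own threshold in one pass.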
Our Splunk architecture has two HFs pointing to two internal indexers and two external indexers. The internal and external indexers hold different data (these indexers also receive data from other external HFs), and the HFs route the data into the respective indexers. We had a situation where a failover script removed the outputs.conf file on both HFs, which resulted in the data being indexed locally on the HFs: that data is searchable on the HFs but never reached the indexers, so it cannot be searched from the search heads. After putting the outputs.conf file back, new data is going to the right indexers as intended, but the data from the gap is indexed on the HFs and missing from the indexers. How can I re-ingest the data that's indexed on the HFs into the indexers using the correct config? I tried renaming the fishbucket folder to trigger re-ingestion, but that only re-ingested a small amount of data, not everything. I can still see data on my HF under $Splunk_home/var/lib/splunk/<index_name>/db/_raw. What's the best way to re-ingest this data without manually moving the files to the indexers? Thanks
There are two main Checkpoint Firewall add-ons available and I am unsure which one to go with. Our Checkpoint firewall is R77.30.

Checkpoint add-on by Splunk: last updated April 2021; supports only Check Point Software R81, Check Point Endpoint client version E84.30, and Check Point Management server version R80.40; supported by Splunk.

Checkpoint add-on by Checkpoint: last updated January 2020; supports all versions supported by Checkpoint.

Can someone please advise which one I should go with?
Evening Splunk Community, I'd like to better understand the consequences of destroying a single indexer peer within my indexer cluster. To make a long story short, while resizing the root partition on one of my indexers I managed to mangle the partition, and the system will no longer boot. Prior to mangling the affected indexer, I did offline the peer by executing the temporary indexer shutdown command below.

splunk offline

Once it was evident I wasn't going to be able to save the affected partition, I decided to build a new indexer, remove the mangled indexer from my cluster, and join the new replacement indexer into the cluster. I removed the affected indexer from my cluster by executing:

splunk remove cluster-peers -peers <guid>

What I would like to understand is whether I've managed to destroy any data in my cluster, and what next steps I need to take to bring my cluster back up to full speed. My cluster consists of six indexing peers with a replication factor of 3 and a search factor of 2, which leads me to believe that my other indexers contain replica copies of the data I potentially destroyed on the mangled indexer. Is this true? I believe the only thing left to do now is to perform a data rebalance to equalize storage utilization across my indexer peers.
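That reading is most likely correct: with a replication factor of 3 and only one peer lost, the cluster manager should re-create the missing bucket copies on the surviving peers automatically once the peer goes down. Assuming the manager's dashboard (Settings > Indexer Clustering) shows "replication factor met" and "search factor met" after fix-up completes, the remaining step is the rebalance, started from the manager node:

```
splunk rebalance cluster-data -action start
```

If either factor is not yet met, let bucket fix-up finish before rebalancing so the manager isn't moving copies it is still trying to repair.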
I've performed a stats by command, and I was wondering if there is a way to store these results as fields and then, for any by-field group whose value is 0, make the value null. For context, I created a new field with an eval case() and then performed a stats by command.

stats command: stats avg(response_time) by category
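A sketch of nulling the zero values after the stats (renaming the aggregate first makes it easy to address in eval):

```
| stats avg(response_time) as avg_response by category
| eval avg_response=if(avg_response==0, null(), avg_response)
```

fillnull does the inverse (null to a value); for the zero-to-null direction, an eval with null() is the usual idiom.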