All Topics


I have two Splunk queries, each of which uses the rex command to extract the join field. Example:

QUERY 1: index=index1 "Query1" | rex field=_raw "abc(?<MY_JOIN_FIELD>def)"
QUERY 2: index=index2 "Query2" | rex field=_raw "ghi(?<MY_JOIN_FIELD>jkl)"

I want to use the transaction command to correlate these two queries, but I can't figure out how to do it. Thanks! Jonathan
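One common approach, as a sketch rather than a definitive answer: run both searches as a single base search, apply both rex extractions, and let transaction group events sharing MY_JOIN_FIELD. The index names and patterns below are taken from the question:

(index=index1 "Query1") OR (index=index2 "Query2")
| rex field=_raw "abc(?<MY_JOIN_FIELD>def)"
| rex field=_raw "ghi(?<MY_JOIN_FIELD>jkl)"
| transaction MY_JOIN_FIELD

Each rex only populates MY_JOIN_FIELD on the events it matches, so the two extractions do not conflict.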
I have come across a unique issue with the MS Cloud Services app when assigning an input for a single storage table. The API works as expected when I initially set the input with a Start Time. However, the data does not continue to collect/ingest beyond the timestamp at which the input is configured, unless I manually intervene and change the input's Start Time.

Example: if I set a Start Time two weeks prior and save the input, it will collect data from the Storage Table from two weeks prior up to the time the input is saved in Splunk. The input then generates 0 results after that time. I checked in Azure Storage Explorer and the table in question continues to write new entries in the same format. I have confirmed Splunk can see that data, because it will start collecting new data only after I manually update the Start Time in the input. I checked the mscs:storage:table:log and there are no errors with the API input's functionality; it shows attempts at the designated interval (5 minutes).

Historically with this input, I've had success by leaving the Start Time at the default (30 days) and setting the table list to * to collect everything. However, this table is part of a very large blob that cannot be pulled in the same fashion. I'm hoping to get some ideas about what could be causing this break in log collection, and to see if there is something I may be overlooking. Any input would be greatly appreciated.
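For reference, here is a hedged sketch of how I have been inspecting the add-on's own logging around the checkpoint; the source wildcard and keywords are assumptions about what and where the add-on logs, not confirmed specifics:

index=_internal source=*cloudservices* (checkpoint OR ERROR OR WARN)
| sort - _time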
I'm trying to do my own "poor man's certificate check". Ideally I'd like to pick up from the config (btool output) the paths to certs so I could check them with the openssl CLI tool. I don't want to do any Python modular input stuff for that, since I want it to run as a simple script on any machine with a UF.

The question therefore is where I should get my certs from: the serverCert, rootCA, clientCert, and sslRootCAPath entries in inputs.conf, outputs.conf, server.conf, and deploymentclient.conf (of course they don't have to be defined in each file). For now I assume the "new" configuration format with a single PEM. Any files that I forgot? Any more entries I missed?
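For comparison, on instances where search is available (so not on a bare UF, where btool remains the way), a hedged REST sketch for pulling the SSL paths out of server.conf; the stanza and attribute names follow the ones listed above:

| rest splunk_server=local /services/configs/conf-server count=0
| search title=sslConfig
| fields title sslRootCAPath serverCert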
I have tried reassigning the orphaned search to the new owner, but wasn't able to fix it. I am getting the error message "couldn't find the object ID". What does that mean? The search is shared globally. Will recreating the old owner, reassigning to the new owner, and then deactivating the old owner account clear the warnings? Or else, please suggest a way to get rid of them.
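In case it is useful, a hedged sketch of what I run to list which saved searches still point at the old owner ("old_owner" is a placeholder for the actual account name):

| rest /servicesNS/-/-/saved/searches splunk_server=local count=0
| search eai:acl.owner="old_owner"
| table title eai:acl.app eai:acl.sharing eai:acl.owner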
Hello all,

In our environment some universal forwarders are not reporting to Splunk Cloud. When I tried to view a forwarder's log file, i.e. splunkd.log, I found that for the past week no log entries were present in the file. What may be the reason? Is it related to the forwarder not sending logs to a Splunk index?

Thank you
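Assuming the forwarders normally send their own _internal logs (they do by default), here is a hedged sketch I use to spot forwarders that have gone quiet; the 15-minute threshold is arbitrary:

index=_internal sourcetype=splunkd
| stats latest(_time) as last_seen by host
| eval minutes_since=round((now()-last_seen)/60,0)
| where minutes_since > 15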
Hi to all,

I have three machines: 1 deployment server, 1 SH/indexer, and 1 forwarder. Looking at the Monitoring Console overview on the deployment server, I don't see the correct configuration (only the deployment server is shown; the SH/indexer and the forwarder are not visible). The data arrives correctly in the index, and in Forwarder Management I correctly see the forwarder client. Finally, the lookup "dmc_forwarder_assets" is empty. Can someone help me please? Thanks.
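For what it's worth, the forwarder asset lookup is normally rebuilt from forwarder connection metrics, so I have been checking whether that raw data is arriving at all; the group and field names below are assumptions about the metrics.log format:

index=_internal source=*metrics.log* group=tcpin_connections
| stats count by hostname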
Hi,

I need to use the Event Timeline Viz to show a timeline of the different URLs being hit over time. This is the first time I have used this visualization and I am struggling. At the moment, this is all I have output: [screenshot not shown]

And here is the XML for the panel: [XML not shown]

Can you please help, as it has been 4 days now since I started with this diagram...

Many thanks, Patrick
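To show the shape of the data I am feeding the panel: timeline-style visualizations generally want a time column first, then a label, then an optional numeric column. A hedged sketch of my base search (the index, sourcetype, and url field are placeholders, and the exact column contract of Event Timeline Viz may differ):

index=web sourcetype=access_combined
| stats earliest(_time) as _time count by url
| table _time url count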
Hello dears,

How can I change the timechart _time axis from y to x?

<base search>
| timechart span=1h sum(REQUESTNAME) as Sikayet count by ilce
| sort -count
| untable _time Xaxis Yaxis
| where Yaxis > 3

Regards
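A commonly used sketch for flipping a timechart so that the series (here ilce) becomes the x-axis: untable the timechart output, then xyseries it back with the fields swapped. This is simplified to a single aggregation so untable yields one value column; field names follow the query above:

<base search>
| timechart span=1h count by ilce
| untable _time ilce count
| xyseries ilce _time count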
Hello,

I am trying to integrate Splunk (on-premises) with Duo (the Duo application is in the cloud), but I am receiving an error. The error appears after I enter the integration key, secret key, and API hostname in the Duo Security Add-on, which I have installed on my SH. I obtained the integration key, secret key, and API hostname after configuring an Admin API application in Duo. (I should mention that I gave my application the following permissions: Grant read information, Grant read log, and Grant read resource.) I have also tried the solution explained in the following article, creating the inputs.conf directly, but without success (Solved: DUO Splunk Connector: Error "Validation for scheme... - Splunk Community).

The error: "Encountered the following error while trying to save: The provided admin API credentials cannot get the necessary logs. Please verify that the Admin API settings are correctly configured."

Can anyone help me? Many thanks, Dragos
This is kind of open-ended, but essentially I'm looking for things that you view as bad config, or at least configuration settings that should be flagged for review. Some ideas I've had so far:

- Indexes with a very short retention period (100 seconds or the like)
- Searches with `index=*` in them
- A deployment server targetUri that doesn't match the name of your actual DS

What other sorts of config would you flag as concerning? Do you have any automated checks for anything like this in-house?
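One automated check along these lines, as a hedged sketch: pull saved searches over REST and flag any whose SPL contains a literal index=*:

| rest /servicesNS/-/-/saved/searches splunk_server=local count=0
| where like('search', "%index=*%")
| table title eai:acl.app eai:acl.owner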
Good day all,

I come to seek guidance from the experts. My team and I have been tasked with creating an alert that will capture hosts that start a Windows AV scan (EventCode=1000) on a Friday and don't complete it by Monday. These long-running scans are causing issues in the environment, and we are hoping to tackle them before the start of business on Monday. The hosts log EventCode=1001 OR EventCode=1002 when they have stopped their scan.

We have attempted to put together a couple of queries: one using a subsearch that grabs all hosts that have logged EventCode=1000, piped into an outer search that does NOT (EventCode=1001 OR EventCode=1002); and a second using the transaction command with the following syntax:

<base search>
| transaction maxspan=3d startswith=EventCode="1000" endswith=(EventCode="1001" OR EventCode="1002") keeporphans=true
| where _txn_orphan=1
| stats count by ComputerName

but we get no results. I do know that the transaction command is a hog and is generally recommended against. I wanted to ask the collective for any thoughts or ideas on this, to find the best practice for this type of search. I have read a couple of posts using streamstats, but I'm not sure if that is the best route for this specific example.

As always, it is greatly appreciated.
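For comparison, here is a stats-based sketch of the same idea (field names follow the question; the Friday-to-Monday window would come from the alert's time range): keep, per host, the latest start and the latest stop, and flag hosts whose latest start has no later stop:

<base search> EventCode IN (1000, 1001, 1002)
| stats latest(eval(if(EventCode=1000, _time, null()))) as last_start
        latest(eval(if(EventCode=1001 OR EventCode=1002, _time, null()))) as last_stop
        by ComputerName
| where isnotnull(last_start) AND (isnull(last_stop) OR last_start > last_stop)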
Hello there,

When we add business transaction availability to the dashboard, it calculates incorrectly. When I calculate the incoming values myself, I get different results from the dashboard. The dashboard uses:

100 - (({CallperMin} - {ErrorperMin}) / {CallperMin})

As an example, with 717 calls per minute: (717 - {ErrorperMin}) / 717 gives a result of -0.17%. If the error count is 0, it gives a correct result; if it is 1, it gives a wrong result.

In addition, how can I show the error rate shown in the BT on the dashboard?
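As a concrete check of the intended math: with CallperMin = 717 and ErrorperMin = 1, availability should be 100 * (717 - 1) / 717 ≈ 99.86% and the error rate 100 * 1 / 717 ≈ 0.14%, so a value like -0.17% suggests the *100 scaling or the parentheses in the dashboard expression do not match the intended formula.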
While trying to log in to the controller for the first time, all users in the account are getting a login-failed message. However, users are able to log in to the Account Management UI. Kindly help resolve the issue. Thanks
Hi, how can I monitor Java applications with Splunk? I tried nmon, but it only gives the whole Java process, not a specific PID! Any ideas? Thanks,
Hi All,

We are using Splunk Cloud and have a Universal Forwarder set up on a Windows machine; it reads CSV files from a particular folder and sends them to the indexer.

inputs.conf:

[monitor://D:\Test\Monitoring\Abc]
disabled = 0
index = indexabc
sourcetype = csv
crcSalt = <SOURCE>

props.conf:

[source::D:\Test\Monitoring\Abc\*.csv]
CHECK_METHOD = modtime

Various CSV files are placed under D:\Test\Monitoring\Abc hourly/daily, and this setup works without any issues most of the time for all the CSV files. But there are some instances where data from a single file for a particular hour/day is missing in the index "indexabc"; this doesn't happen with one particular file, but with various files.

For example, there is a CSV called memory.csv which updates daily at 23:47, and when I checked data for the previous month (timechart span=1d), it shows no data for 25th March. I have checked the third-party script which sends data to this Windows server, and it did so successfully.

When a CSV file is read and indexed, I see the below entry in splunkd.log, but it is not present for 25th March, for which the data is missing:

03-26-2022 23:47:49.495 +0000 INFO WatchedFile [6952 tailreader0] - Will begin reading at offset=0 for file='D:\Test\Monitoring\Abc\memory.csv'.

For the period 25th March 23:40 to 23:50, I have checked for splunkd errors in the _internal index; the results are given below: [screenshot not shown]

Can you please suggest what could be causing this intermittent issue and what troubleshooting steps I can follow? Thank you.
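In case it helps anyone reproduce this, a hedged sketch for checking the UF's own file-tracking logs around the gap; the component follows the WatchedFile line quoted above, and <your_uf_host> is a placeholder:

index=_internal sourcetype=splunkd component=WatchedFile host=<your_uf_host> "memory.csv"
| sort - _time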
Hi, can I create a dashboard like this, with checkbox filtering? [screenshot not shown]
I encountered a problem when opening the Microsoft O365 Email Add-on for Splunk. It is developer-supported, but the developer didn't reply.

When I open the add-on, it keeps loading forever. It is the same for all three tabs: Inputs, Configurations, and Search.

I have tested the add-on on an on-premises Splunk instance and it worked perfectly. Is this a problem specific to Splunk Cloud? What could we do to solve it? Thank you.
Hi everyone,

Could you please help me with the following request? I want to create a custom alert action and send results as an Excel sheet via email. Does anyone happen to know a similar app that is compatible with Splunk 8.2?

Thanks for your support.
Dear All,

I'm writing regarding the submit button functionality in Dashboard Studio. As you can see in the image, we currently have a dashboard with a few inputs and a submit button. This button does its job perfectly: it allows the queries to be loaded only when the button is clicked, which is exactly what we intended by adding the button to the dashboard.

Our wish would be to have the submit button clicked by default. As you can probably imagine, this is just so that the user does not have to click the button. I don't know if this is something that is possible.

Thanks a lot! Sincerely, Francisco
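In case it helps the discussion, here is a hedged sketch of the kind of layout option we are hoping exists in the dashboard definition JSON; "submitOnDashboardLoad" is our guess at an option name, not something we have confirmed in the docs:

"layout": {
    "options": {
        "submitButton": true,
        "submitOnDashboardLoad": true
    }
}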
My logs are in the format:

My-Application Log: Some-Key= 99, SomeOtherKey= 231, SomeOtherKey2= 1231, Some Different Key= 0, Another Key= 121

I currently use the query:

index="myindex" "My-Application Log:" | extract pairdelim=",  " kvdelim="= " | table Some-Key SomeOtherKey SomeOtherKey2 "Some Different Key" "Another Key"

It is able to extract events; however, the table is filled with blank/null values.

How can I visualise the data if I have this format of logs? I have to group by Some-Key; for example, the visualization should be grouped on the basis of Some-Key. Thanks in advance.
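One workaround sketch, using explicit rex extractions instead of extract so the keys with spaces and the space after the equals sign don't break the key-value parsing; the regexes assume the sample line above, and only two of the keys are shown:

index="myindex" "My-Application Log:"
| rex "Some-Key=\s*(?<Some_Key>\d+)"
| rex "Another Key=\s*(?<Another_Key>\d+)"
| stats count by Some_Key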