All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need to get pictures into the dashboard based on a key value (the id in the URL below), which differs for each case. The picture is located on a server with the following kind of URL:

http://server.abc.com/script.py/get_image?id=123456789&png=on

The problem is that Splunk dashboard XML doesn't allow a bare "&" sign. I've converted it to %26, but now the server (server.abc.com) doesn't understand the converted version of "&". Does anyone have a solution for this? Thank you very much in advance.
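In XML, a literal ampersand should be written as the entity &amp;amp; rather than percent-encoded: the XML parser converts the entity back to "&" before the browser requests the URL, so the server receives the original query string. A minimal sketch of that round trip using Python's standard library:

```python
from xml.sax.saxutils import escape, unescape

# The URL as the server expects it, with a literal "&"
url = "http://server.abc.com/script.py/get_image?id=123456789&png=on"

# How it must be written inside dashboard XML: "&" becomes "&amp;"
xml_safe = escape(url)

# An XML parser reverses the entity, restoring the original URL
assert unescape(xml_safe) == url
```

In other words, %26 changes the URL itself (which the server then fails to parse), while &amp;amp; only changes how the URL is spelled inside the XML document.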
I have a question. In a microservice-based platform we are getting several logs from different applications. Each application tracks unique transactions via an id: either a CorrelationId, SessionId, or TransactionId. I want to put this in a lookup file, application.csv, and use it for the same dashboard, so my lookup will look like:

Application  SourceLogs         Unique_Identifier
App1         Application1.logs  CorrelationId
App2         Application2.logs  SessionId
App3         Application3.logs  TransactionId

I have created an input where the user can select the Application via tkn_app:

index=application_logs
| lookup application.csv SourceLogs as source
| search Application=$tkn_app$
| bin span=5m _time
| stats dc(Unique_Identifier) AS TPS by _time

However, this searches for the literal strings CorrelationId, SessionId, and TransactionId, not the actual values. How do I make Unique_Identifier resolve to the right field? Note the logs are in JSON format, so the fields CorrelationId, SessionId, and TransactionId are auto-detected by Splunk.
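One common approach (a sketch, assuming at most one of the three id fields is present per event) is to keep the lookup for the Application filter but coalesce the auto-extracted fields into a single value before the stats:

```
index=application_logs
| lookup application.csv SourceLogs as source OUTPUT Application
| search Application=$tkn_app$
| eval uid=coalesce(CorrelationId, SessionId, TransactionId)
| bin span=5m _time
| stats dc(uid) AS TPS by _time
```

coalesce() takes the first non-null of its arguments, so the right identifier is counted regardless of which one each application emits, and the Unique_Identifier column in the lookup becomes unnecessary for the dc() itself.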
Hi @gcusello , A user is complaining about the following error. Can you please help me with this? "Your search has been queued: Your maximum disk usage quota has been reached...Use the Job Manager to delete some of your saved search results." Regards, Rahul Gupta
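That message appears when the disk space used by a user's search artifacts exceeds the srchDiskQuota for their role. The immediate fix is to have the user delete old jobs under Activity > Jobs; a longer-term option is raising the quota in authorize.conf. A sketch, assuming a role named "user" and a 500 MB quota:

```
# authorize.conf on the search head (sketch)
[role_user]
# Maximum disk space (MB) this role's search jobs may consume
srchDiskQuota = 500
```

Quota changes take effect for the role, so consider whether the user genuinely needs more artifact space or simply has stale jobs accumulating.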
Hi everyone, see if someone could give me a hand. My scenario is similar to this:

Table 1
ID  ID2     Whatever rest columns...
1   AA      ...
2   FC      ...
3   OM      ...
1   BB      ...
1   MQ      ...

Table 2
ID  ID2     Whatever rest columns...
1   AA, BB  ...
2   FC      ...
3   OM      ...
4   BB      ...
5   MQ      ...

You see that I have two identifiers. The first table, we could say, is a collection of logs, while the second one is the most up-to-date inventory. What I would like to do is perform an 'inner join' so I only get the rows from Table 1 whose ID and ID2 exist in Table 2, resulting in this:

ID  ID2     Whatever rest columns...
1   AA      ...
2   FC      ...
3   OM      ...
1   BB      ...

The row with ID 1 and ID2 MQ will be removed. Thanks for your help
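A sketch of one way to do this in SPL, assuming both tables are searchable and that Table 2's ID2 column can hold a comma-separated list (as in "AA, BB") that must be split before matching:

```
<search returning Table 1>
| join type=inner ID ID2
    [ <search returning Table 2>
      | makemv delim="," ID2
      | mvexpand ID2
      | eval ID2=trim(ID2)
      | fields ID ID2 ]
```

mvexpand turns "AA, BB" into one row per value, so the pair (1, BB) matches; join type=inner then keeps only Table 1 rows whose (ID, ID2) pair exists in Table 2. For a large inventory, a lookup-based filter usually scales better than join.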
I'm attempting to follow along with a Splunk Fundamentals training which requires me to upload a few files (a csv, linux_secure, and a DB file) and then navigate to the Search & Reporting app to make a few queries on the data. I'm uploading the files correctly, since I'm following along with the steps and I get successful upload prompts; note that I'm just uploading files from my computer, so it's very straightforward. Once I navigate to the Search & Reporting app I expect to be able to view the global stats in the "What to Search" box, but in my instance I don't see that section at all. When I make a basic query it returns zero results, even with "All time" selected. Please let me know what the issue is; I don't think it can be anything with the installation, since I'm just using a Splunk Cloud instance. I would have assumed this was a pretty straightforward exercise.
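One quick diagnostic (a sketch) is to list event counts per index, which shows whether the uploaded data landed in an index other than the one your searches or role defaults cover:

```
| eventcount summarize=false index=*
| table index count
```

If the uploads appear under a specific index (e.g. the one chosen on the upload screen), searching with an explicit index=<that_index> clause should return the events.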
I want to split a row into multiple rows by splitting the values under the same column. Example:

col1     col2     col3     col4
A,a      Z,z      B,b      X,x
P,p               C,c      Y,y
V,v

In the above example, A,a / P,p / V,v are in the same row, but I want each of them in a different row under column col1.
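If col1 is a multivalue field, mvexpand will fan each value out into its own row (a sketch, assuming the other columns should be duplicated onto every new row):

```
...
| mvexpand col1
```

mvexpand replicates the event once per value of col1. If col1 is instead a single space-delimited string like "A,a P,p V,v", convert it to a multivalue field first with | makemv col1 (the default makemv delimiter is a single space), then mvexpand it.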
Hi, I have the below log files under the path /path/to/app/:

usera-x.log
userb-x.log
userc-x.log
userd-y.log
usere-y.log
userf-z.log
userg-z.log
...etc.

To collect only the *-x.log files I am using the inputs.conf below, but the data isn't being indexed into Splunk. Is there any issue with my inputs.conf?

[monitor://E:\path\to\app\*-x.log]
disabled = 0
index = test
sourcetype = metric
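An alternative that is often more reliable than a wildcard in the stanza path is to monitor the directory and filter with a whitelist regex, which is matched against the full file path (a sketch, using the Windows path from the stanza above):

```
[monitor://E:\path\to\app]
whitelist = -x\.log$
disabled = 0
index = test
sourcetype = metric
```

Also worth checking: that the index "test" actually exists on the indexers, and splunkd.log on the forwarder for TailReader errors mentioning this path.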
Referring to this question (Not all Splunk cookies have the HttpOnly tag set), answered by @anaidu_splunk , I can see that some of the cookies can't be set with the HttpOnly tag because they are used by scripting elements, so setting them as HttpOnly would break web interface functionality. I would like to get information on the splunkweb_uid cookie, which also doesn't have the HttpOnly tag. Can someone help verify that this cookie doesn't contain any secure information that could be exploited by a third party? Below is a screenshot from my Splunk portal with the cookie information for reference.
Hi, there is this description for INDEXED_VALUE in fields.conf:

INDEXED_VALUE = [true|false|<sed-cmd>|<simple-substitution-string>]
* Set this to true if the value is in the raw text of the event.
* Set this to false if the value is not in the raw text of the event.
* Setting this to true expands any search for key=value into a search of value AND key=value (since value is indexed).
* NOTE: You only need to set indexed_value if indexed = false.

According to the description, INDEXED_VALUE is used when indexed = false. So when is the INDEXED_VALUE option actually used? Which circumstances require it? Is there a case where only the value is indexed and the key (field) is not? The description confuses me. Hope someone can help me out. Thanks a lot.
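The setting matters for search-time fields whose values never appear as terms in _raw, for example fields derived from a lookup or from metadata. By default Splunk optimizes a search for key=value into "value AND key=value", and if "value" is not a term in the index, that optimization silently returns zero events; INDEXED_VALUE = false disables it. A sketch with hypothetical field names:

```
# fields.conf (sketch)

# "status_text" values (e.g. "OK") literally occur in _raw,
# so the default (true) lets Splunk pre-filter on the term.
[status_text]
INDEXED_VALUE = true

# "region" comes from a lookup and never appears in _raw;
# without this, a search like region=emea could match nothing.
[region]
INDEXED_VALUE = false
```

So the circumstance that requires it is exactly the one the description hints at: the field itself is resolved at search time, but its values are not present in the raw event text.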
Hi all, I'll try and keep it short and to the point. We have a standalone search head that is currently connected to an indexer cluster with 4 peers. We would now like to connect a second, 3-peer indexer cluster that is hosted in AWS. When I add the AWS cluster master to the search head via Settings -> Indexer Clustering, it fails to connect with the error:

Master has multisite enabled but the search head is missing the 'multisite' attribute

If I configure that in server.conf and reboot, the AWS cluster master connects fine, but the 3 peers do not appear (as per the screenshot below) and I am not able to search the indexes. If I manually add the index peers under Settings -> Distributed Search -> New Search Peer, the peers add fine and I am able to search indexes in AWS as required. However, I need the peers to be discovered automatically by the search head via the cluster master, as the AWS indexers are rebuilt on a regular basis. Below is the server.conf on our search head, and I have been informed that autodiscovery is enabled on the AWS cluster master. I have logged a case with Splunk but thought I would try here as well. Any information would be appreciated. Thanks, Trev
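For reference, a search head joining a multisite cluster needs a site assignment as well as the multisite flag, and a search head attached to several clusters lists them via clustermaster: stanzas. A sketch (hostnames and keys are placeholders; site0 disables search affinity so the search head searches all sites):

```
# server.conf on the search head (sketch)
[general]
site = site0

[clustering]
mode = searchhead
master_uri = clustermaster:onprem, clustermaster:aws

[clustermaster:onprem]
master_uri = https://existing-master:8089
pass4SymmKey = <key>

[clustermaster:aws]
master_uri = https://aws-master:8089
pass4SymmKey = <key>
multisite = true
```

With this shape, the multisite attribute is scoped to the AWS cluster only, which avoids disturbing the existing single-site connection.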
Hi, we are planning to move our Splunk environment to our Nutanix infrastructure. We expect our collected logs to be 20-30 GB/day, and Splunk is mainly used as a SIEM solution with around 4 users accessing it concurrently. We had some internal discussions, and I wanted to understand whether we can use fewer resources than those listed below to run Splunk + ES. If anyone is running a similar setup, could you share the hardware specs used?

Search head: 24 vCPU, 32 GB
ES search head: 24 vCPU, 32 GB
Indexer: 24 vCPU, 32 GB
License + Deployment: 12 vCPU, 16 GB

Thanks
Hi, I have an application that shows organization-level data and has around 9-10 dashboards. First there is an executive dashboard that gives upper management an overall overview of what is happening, followed by other dashboards with different KPIs and metrics. We have placed a set of filters so that a TL/Manager can select them and check the data for their team, but when they move to another dashboard they need to select the filters again. Is there any way to resolve this, so that when a user selects a filter on one dashboard and navigates to a different dashboard, it still holds the value?
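In Simple XML, token values can be carried to another dashboard by passing them as form.* URL parameters in a drilldown link, which pre-populates the matching input on the target dashboard. A sketch (other_dashboard, myapp, and tkn_team are hypothetical names):

```
<drilldown>
  <!-- form.<token> in the target URL pre-fills that dashboard's input -->
  <link target="_blank">/app/myapp/other_dashboard?form.tkn_team=$form.tkn_team$</link>
</drilldown>
```

Applying the same pattern to the navigation links between all 9-10 dashboards keeps the selected filters as the user moves around.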
Hi everyone, below is my query:

index="abc*" OR index="xyz*"
| eval raw_len=len(_raw)
| eval GB=raw_len/pow(1024,3)
| timechart sum(GB) as total_GB by sourcetype

I am displaying the trend for the last 7 days. Since it's a saved search, I want to display the last-7-days trend for each of the last 3 months. Can someone guide me on how that is possible? Thanks in advance
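One option (a sketch) is to run the search over roughly 3 months and use timewrap to overlay each 7-day period as its own series. Note that timewrap operates on a single series, so the by sourcetype split would need to be dropped or run once per sourcetype:

```
index="abc*" OR index="xyz*" earliest=-12w@w
| eval GB=len(_raw)/pow(1024,3)
| timechart span=1d sum(GB) as total_GB
| timewrap 1w
```

Each resulting series is one week of daily totals, stacked so that the weeks over the last ~3 months can be compared directly.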
How do I manage Splunk downtime during a version upgrade?
I have my source name as below; the 'user' part keeps changing:

E:\test\Apps\path\EventLogs\MemoCPU\user-MemoCPU.log

I don't want to display the entire path, just user-MemoCPU as the source. Can we achieve this?
Hi everyone, currently I am monitoring the *.log files under a path. I have not given a source name, since we don't have a definite source and the file names keep changing. My inputs.conf:

[monitor://[path]\*.log]
disabled = 0
index = test
sourcetype = sourcetypetest

When the data is indexed into Splunk, the source name comes out as "E:\test\Apps\path\EventLogs\MemoCPU\user-MemoCPU.log", whereas I just want to extract the 'user-MemoCPU' part of the source and display it in a dashboard panel. Please let me know if this is possible. I am building the dashboard panel with the query below:

index = test
| stats count by source
| sort -count
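A sketch of one way to do this at search time: split the source path on backslashes, keep the last segment, and strip the extension before the stats:

```
index=test
| eval short_source=mvindex(split(source, "\\"), -1)
| eval short_source=replace(short_source, "\.log$", "")
| stats count by short_source
| sort -count
```

split() turns the path into a multivalue field, mvindex(..., -1) takes the final segment (user-MemoCPU.log), and replace() drops the ".log" suffix, leaving just user-MemoCPU for the panel.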
How do I find the disk utilization of all my indexes? And how do I write an alert for each one going over a certain amount?
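One way to see per-index disk usage is the REST endpoint for index status (a sketch, run from the search head; the 100000 MB threshold is an arbitrary example for the alert condition):

```
| rest /services/data/indexes splunk_server=*
| stats sum(currentDBSizeMB) as sizeMB by title
| where sizeMB > 100000
```

Saved as an alert that triggers when the number of results is greater than zero, this fires once any index's total size across peers exceeds the chosen threshold.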
Hi, I have a 4-indexer cluster with SmartStore. I keep seeing these warnings on all 4 members:

Search peer <splunk-idx-1> has the following message: The minimum free disk space (512000MB) reached for /opt/splunk/var/run/splunk/dispatch. 3/11/2021, 1:30:00 AM

No matter what I do to make extra room, I keep getting the warnings. On each indexer, server.conf is configured locally (/opt/splunk/etc/system/local) with:

[cachemanager]
eviction_policy = lru
#eviction_padding = 5120
eviction_padding = 10240    <- doubled
max_cache_size = 0
hotlist_recency_secs = 86400
hotlist_bloom_filter_recency_hours = 360
evict_on_stable = false

# disk usage processor settings
[diskUsage]
#minFreeSpace = 5000
minFreeSpace = 512000    <- 500 GB
pollingFrequency = 100000
pollingTimerFrequency = 10

The warning troubles me because if the cache manager is evicting properly I should not be seeing it, or am I mistaken? I don't see a lot of misses in the MC under SmartStore Cache Performance; I see some repeated downloads, but no excessive downloads. Should I set max_cache_size instead of the minFreeSpace setting?

Per the Splunk docs ("Set limits on disk usage"): "Note: This topic is not relevant to SmartStore indexes. See Initiate eviction based on occupancy of the cache's disk partition for information on how SmartStore controls local disk usage."

Per the Splunk docs ("Disk full issues"): "A disk full related message indicates that the cache manager is unable to evict sufficient buckets. These are some possible causes: Search load overwhelming local storage. For example, the entire cache might be consumed by buckets opened by at least one search process. When the search ends, this problem should go away." (This is not my case, because even when search activity has ended, the warnings persist.) "Cache manager issues. If the problem persists beyond a search, the cause could be related to the cache manager. Examine splunkd.log on the indexer issuing the error."

I am also seeing some "Cache was full and space could not be reserved" warnings, but I don't know how to fix this. Any advice is greatly appreciated. Thank you
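Note that minFreeSpace = 512000 itself causes the warning: the message fires whenever free space on the dispatch partition drops below that threshold, so a 500 GB floor makes it almost inevitable regardless of cache eviction. A sketch of a more conventional setup, where eviction_padding (not minFreeSpace) provides the SmartStore cache headroom:

```
# server.conf on the indexers (sketch)
[diskUsage]
# warn/stop only when genuinely low on space
minFreeSpace = 5000

[cachemanager]
eviction_policy = lru
# keep ~10 GB free beyond minFreeSpace before evicting
eviction_padding = 10240
# 0 = size the cache from free partition space rather than a fixed cap
max_cache_size = 0
```

With a 5 GB minFreeSpace and a 10 GB eviction_padding, the cache manager starts evicting well before the disk-usage processor ever complains.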
I was recently given admin rights at my job to work suppressions. I have the ability to go to the notable event suppressions menu and create suppressions there, but when I go to Incident Review and attempt to suppress from there, the "Suppress Notable Events" option is not present. Is there some sort of option I need to turn on, or am I missing something entirely different?
I have an interesting dilemma and I believe there is a solution, but I could use some advice on this one. We have a log file that records submitted requests in the following format:

8038$$DRY ETCH$$3/9/2021 9:45:22 AM$$[More columns separated by double-$]

The first "field" is the request ID, then "$$", then an area, then "$$", then the actual date/time, etc. The issue is that every time a new entry is filled out, the next request ID is also added to the log file, so if the record above was the last one entered, the log file would end with the following records:

8037$$CMP$$3/9/2021 7:32:04 AM$$[More columns separated by double-$]
8038$$DRY ETCH$$3/9/2021 9:45:22 AM$$[More columns separated by double-$]
8039

My problem is that at 9:45:22 AM, Splunk ingests this as the event:

"$$DRY ETCH$$3/9/2021 9:45:22 AM$$[More columns separated by double-$] 8039"

It ingests the request ID for the next event at the end of the current event. There are often hours between requests. I want the ingestion to break immediately at the [\r\n] and NOT ingest the record ID from the last row of the log file until hours later, when the event is completed as a new request is entered for that request ID. This is my props.conf stanza:

[rdaeng_submissionmetrics]
TIME_PREFIX = ^\s*\d+\${2}[\s\w\d-,]+\${2}
MAX_TIMESTAMP_LOOKAHEAD = 65
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
LINE_BREAKER = ([\r\n]+)^\s*\d+\${2}[\s\w\d-]+\${2}\d{1,2}\/\d{1,2}\/\d{4}\s+\d{1,2}:\d{2}:\d{2}\s+(AM|PM)\${2}
SHOULD_LINEMERGE = false
TRUNCATE = 999999
MAX_EVENTS = 2048
ANNOTATE_PUNCT = false

I was thinking about removing the LINE_BREAKER and adding:

BREAK_ONLY_BEFORE = \d+\${2}[\s\w\d-]+\${2}\d{1,2}\/\d{1,2}\/\d{4}\s+\d{1,2}:\d{2}:\d{2}\s+(AM|PM)\${2}

Suggestions on the best method? If I used BREAK_ONLY_BEFORE, would it still add the future request ID as the tail of the latest event?
If I use:

MUST_BREAK_AFTER = \$\$(No|Yes).*##(No|Yes)##(No|Yes)[\r\n]+

would it still record the 4-digit number of the next request ID as an event by itself? If I set up a transforms.conf rule to throw out a 4-digit number that is the only thing in the record, would the universal forwarder send the 4-digit number again next time? (I doubt it, because the UF keeps track of the last chunked position it sent, and the heavy forwarder is what throws out the request ID that came through; the UF doesn't even know it was thrown out.) I'm stuck. "Help me, Obi-Splunk Kenobi. You're my only hope."
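One approach worth testing (a sketch, untested against this exact feed) is to keep SHOULD_LINEMERGE = false and move the break point to just before the request ID, using a lookahead so the ID itself is not consumed by the breaker and stays at the head of the next event:

```
# props.conf (sketch) - break before each request ID without consuming it
[rdaeng_submissionmetrics]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d+\$\$)
TIME_PREFIX = ^\d+\$\$[^$]+\$\$
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 65
TRUNCATE = 999999
ANNOTATE_PUNCT = false
```

A dangling ID like "8039" is not yet followed by "$$", so the lookahead does not match and no break is emitted at that point; whether it still ends up appended to the previous event depends on when the pipeline flushes its buffered tail, so this needs to be verified against the live feed. Note also that BREAK_ONLY_BEFORE only applies when SHOULD_LINEMERGE = true, so it cannot simply replace LINE_BREAKER in the existing stanza.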