All Topics


I have a user who has asked how to get access/permissions to the "export" button while doing a search in Splunk. It is not showing up for them when they run a search. The first pic has that option highlighted. Below is a screenshot of a search they ran, without the export/download option. Is this controlled via role or some other setting?
Hi, I need to join two searches. For example:

Example 1:

| inputlookup join_example1.csv

country product day stock
Spain apples 10/10/2022 25
France apples 10/10/2022 22
Spain grapes 10/10/2022 30
France grapes 10/10/2022 28
Spain apples 10/10/2021 25
France apples 10/10/2021 22
Spain grapes 10/10/2021 30
France grapes 10/10/2021 28

Example 2:

| inputlookup join_example2.csv

day product requested
10/10/2022 apples 90
10/10/2021 apples 110
10/10/2022 grapes 100
10/10/2021 grapes 110

If I join both searches:

| inputlookup join_example1.csv
| join product, day [| inputlookup join_example2.csv]
| table product day country stock requested

The result is:

product day country stock requested
apples 10/10/2022 Spain 25 90
apples 10/10/2022 France 22 90
grapes 10/10/2022 Spain 30 100
grapes 10/10/2022 France 28 100
apples 10/10/2021 Spain 25 110
apples 10/10/2021 France 22 110
grapes 10/10/2021 Spain 30 110
grapes 10/10/2021 France 28 110

But I need the subsearch to merge only with the first result, like this (only in one country):

product day country stock requested
apples 10/10/2022 Spain 25 90
apples 10/10/2022 France 22 0
grapes 10/10/2022 Spain 30 100
grapes 10/10/2022 France 28 0
apples 10/10/2021 Spain 25 110
apples 10/10/2021 France 22 0
grapes 10/10/2021 Spain 30 110
grapes 10/10/2021 France 28 0

That is only an example; I need each subsearch result to merge only once. Does anyone know a solution for this? Thanks!!!
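One possible approach (an untested sketch against these example lookups): keep the join, then use streamstats to number the joined rows within each product/day group and zero out requested on every row but the first:

```
| inputlookup join_example1.csv
| join product, day
    [| inputlookup join_example2.csv]
| streamstats count as row_in_group by product, day
| eval requested=if(row_in_group=1, requested, 0)
| fields - row_in_group
| table product day country stock requested
```

Which country keeps the value depends on the row order coming out of the join, so adding a sort by country before the streamstats may be needed to make the result deterministic.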
My default timezone is EST. How do I change it so that when other users are using my dashboards they can view it in UTC or a different time zone? In other words, how do I display my results in a different time zone, or add an offset?
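Time display in Splunk follows each viewer's user-preference timezone, so one hedged workaround for a fixed-timezone dashboard column is to format the time manually with an explicit offset (the 5-hour constant below assumes the viewer is on EST = UTC-5 and ignores daylight saving):

```
| eval time_utc = strftime(_time + 5*3600, "%Y-%m-%d %H:%M:%S") . " UTC"
```

Since strftime renders epoch time in the viewing user's timezone, adding the viewer's offset back yields a UTC wall-clock string; a different constant gives any other fixed offset.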
Hello all, I'm not sure whether what I have been asked to do is achievable. I'm hoping that someone can advise. We have a Windows 2003 server that cannot have a UF installed, as it is not compatible with our current environment (8.1.6). That aside, I have managed to ingest data using 'open' shares from a UF on a Windows 2016 server to the 2003 server. I now have a request to ingest data from a restricted share on the 2003 server. I have tried setting up a share from the 2016 server to the 2003 server, but this does not work. I guess this is because the UF is not running under the same account that the share has been set up for? Can anyone tell me how I can create a share for the Splunk UF to use? Thanks
This has been asked a million times. I've been digging through the various postings but haven't figured out what I'm doing wrong. I'm able to do a search-time extraction using the rex command to get a field exactly the way I want it. But when I try to add it to the field extractor, it includes too much information. I need to extract the LINK_TARGET value from the event below, but the USER details are also being included in the field extractor setup. Hopefully my redactions don't make this impossible for gurus to assist.

Search command:

index="index" search_term | rex field=_raw "LINK_TARGET\s:\s(?<link_target>.*)\n"

Data:

2021-11-10 16:03:14.631 INFO [blah] [Country=US] [User=user] [ip] [DefaultLynxMetricsLogger] [blah] [blah] Metrics logging start: key blah_SEARCH_ORIGIN
LINK_TARGET : https://www.blah.com/en_US/blah?utm_source=copy&utm_medium=blah&utm_campaign=blah
USER : 9999999
Metrics logging end
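A hedged guess at the over-capture: the greedy `.*` runs to the end of the line, and the field extractor's generated regex may not stop at the newline the way the interactive rex did. Since the URL contains no whitespace, a pattern that captures only non-whitespace characters should stop before the USER block:

```
index="index" search_term
| rex field=_raw "LINK_TARGET\s:\s(?<link_target>\S+)"
```

The same `\S+` pattern can be pasted into the field extractor's regular-expression editing mode instead of letting it auto-generate one.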
@sideview  Hi Nick, I am using a join with mstats, but I am hoping that I don't have to. However, I can't crack it; any help would be amazing. Below is the current SPL:

| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid service.type service.name replica.name
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| rename "service.type" as service_type
| eval Process_Name=((service_name . " # ") . replica_name)
| sort 0 - _time
| dedup _time pid
| join type=left Process_Name _time
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" replica.name service.type
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | eval Process_Name=((service_name . " # ") . replica_name)
    | table Process_Name, Replica, "service.type", _time
    | sort 0 - _time
    | dedup _time Process_Name]
| table _time Process_Name Replica cpuPerc service_type

I have tried to make it one mstats, but that will not work: in this case min("mx.replica.status") as Replica has no pid, so Splunk gives me back a blank for this field. So do I have to use a join?

| mstats min("mx.replica.status") as Replica min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid service.type service.name replica.name
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| rename "service.type" as service_type
| eval Process_Name=((service_name . " # ") . replica_name)
| sort 0 - _time
| dedup _time pid
| table _time Process_Name Replica cpuPerc service_type
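One join-free pattern that might be worth trying (a sketch only, not tested against this metrics index): run the two mstats searches with append, then recombine the rows by time and process name with stats:

```
| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid service.type service.name replica.name
| append
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY service.name replica.name service.type]
| eval Process_Name='service.name' . " # " . 'replica.name'
| stats values(cpuPerc) as cpuPerc values(Replica) as Replica values(service.type) as service_type values(pid) as pid by _time Process_Name
```

Because stats groups by Process_Name rather than pid, the Replica rows (which carry no pid) should still land on the same output row as the CPU rows.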
I am currently using an input token called OS. I have three values for the token: MAC, Windows, Linux. In my visualization I want to say:

If OS = Mac, then run this search.
If OS = Windows, then run this search.
If OS = Linux, then run this search.

I am aware that the eval command has decision logic built into it, but I don't think it can handle subsearches inside a case. Any help is appreciated. Thank you, Mark
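In Simple XML, one common pattern for this (sketched here with hypothetical index and sourcetype names) is to have the input's change handler set a second token holding the query, then reference that token in the panel's search:

```
<input type="dropdown" token="OS">
  <label>OS</label>
  <choice value="mac">MAC</choice>
  <choice value="windows">Windows</choice>
  <choice value="linux">Linux</choice>
  <change>
    <condition value="mac"><set token="os_query">index=main sourcetype=mac_os_logs</set></condition>
    <condition value="windows"><set token="os_query">index=main sourcetype=wineventlog</set></condition>
    <condition value="linux"><set token="os_query">index=main sourcetype=linux_secure</set></condition>
  </change>
</input>
<search>
  <query>$os_query$ | stats count</query>
</search>
```

Each condition swaps in an entirely different base search, which sidesteps the limitation that eval/case cannot run subsearches.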
Hello Team, I need help writing a query to check the CPU and memory utilization of pods in Splunk. Thank you.
Hello from the Splunk Data Manager Team,

We are excited to announce the preview of Data Manager for Splunk Cloud. Before you search through previous conversations looking for assistance, we want to provide you with some basic information and quick resources.

Want to access product docs? The User Manual offers detailed guidance on each stage of using Data Manager.
Want to request more features? Add your ideas and vote on other ideas at the Data Manager Ideas Portal.
Want to search for a solution? Get answers from other Splunk customers and experts on the Data Manager Forum.

Please reply to this thread with any questions or to get extra help!
Hi, I am using geostats for the first time:

| inputlookup gps3.csv | geostats latfield=Latitude longfield=Longitude count by Couleur

I need to display on the pie charts the number or the percentage of Couleur. And when I open my dashboard, I also need to focus directly on my country instead of the world map. How do I do this, please?

I also need something else: I need to count the "id" field values and display them on my map, but nothing happens. What is wrong, please?

index=toto sourcetype=tutu site="x" OR site="y" OR site="z" | fields id site | stats dc(id) as id | lookup gps3.csv Ville as site output Latitude Longitude | geostats latfield=Latitude longfield=Longitude count(id) by site
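On the second search, a hedged observation: the stats dc(id) step collapses everything to a single row and drops the site field, so the later lookup and geostats have nothing to group or place. geostats accepts stats-style functions directly, so a sketch like this may behave better:

```
index=toto sourcetype=tutu site="x" OR site="y" OR site="z"
| fields id site
| lookup gps3.csv Ville as site OUTPUT Latitude Longitude
| geostats latfield=Latitude longfield=Longitude dc(id) by site
```

Here the distinct count of id is computed per map cluster by geostats itself rather than being pre-aggregated away.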
Hi Splunk folks,

My team is seeing a pesky issue with Palo Alto logs where a small subset are not being sourcetyped into pan:traffic, pan:threat, etc. As pan:log is the default, we have a few logs that keep this sourcetype. We have attempted to regex and nullQueue out the remainder of the pan:log logs, but with no success. When we implement this TRANSFORMS entry in props.conf, we place it at the end, as we understood the order to follow a left-to-right priority.

Here is an example of a log that is being sourcetyped as "pan:log" and that we would want to drop. It seems to be fragmented from syslog, but nonetheless, junk to us:

000-1823048e98,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2021-11-10T10:04:03.905+00:00,,,infrastructure,networking,network-protocol,3,"used-by-malware,has-known-vulnerability,pervasive-use",,dns,no,no,0

Palo Alto props.conf:

[pan_log]
pulldown_type = false
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?:[^,]*,){5}
MAX_TIMESTAMP_LOOKAHEAD = 100
TRANSFORMS-sourcetype = pan_threat, pan_traffic, pan_system, pan_config, pan_hipmatch, pan_correlation, pan_userid, pan_globalprotect, pan_decryption

We added a new nullQueue entry into transforms.conf and then inserted pan_discard after the last entry in the transforms above:

[pan_discard]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Any suggestions?
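A hedged thought on the discard stanza: transforms in the list run in order and the last matching one wins for queue routing, so a catch-all REGEX = . would match every pan_log event, not just the fragments. Anchoring the regex to the shape of the junk lines (the pattern below is only illustrative, modeled on the single sample fragment shown) keeps the real events flowing:

```
[pan_discard]
REGEX = ^[0-9a-f-]+,0,0,
DEST_KEY = queue
FORMAT = nullQueue
```

The well-formed PAN events begin with a receive-time field rather than a truncated ID, so a start-of-line anchor on the fragment shape should leave them untouched.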
Hello Team, I am new to Splunk and we have integrated Splunk with OCP logs. I am able to see the logs, but I am not sure how to write a query, for example, to fetch a pod's active status, CPU utilization, memory utilization, and API response time. Any help would be highly appreciated. Thanks in advance.
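If the OCP data arrives through Splunk Connect for Kubernetes or a similar add-on, the index, sourcetype, and field names vary by setup, so the following is only a hypothetical sketch to adapt:

```
index=ocp_metrics sourcetype=kube:container:metrics
| stats latest(cpu_usage_pct) as cpu latest(memory_usage_mb) as memory by pod_name
```

Running the base search alone first and inspecting the extracted fields in the sidebar is the quickest way to find the real field names to substitute.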
I'm a bit confused. I tried to colour one of the columns in my table by the range of values in that table, and it works. Almost. The problem is that the colouring works separately for each page of results, which is a bit ridiculous.

These are two pages from the same search. Kind of ridiculous, isn't it? Is it that I'm doing something wrong? It seems there's no simpler way than to choose "range" with a preset. But how do I make it work across the whole results range (so my first screen should be mostly orange, and the second should be blue)?
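If the automatic "range" mode only scales per page, one workaround (a sketch, with made-up threshold values and field name) is to switch to fixed thresholds in the table's Simple XML format block, so every page is coloured against the same scale:

```
<format type="color" field="count">
  <colorPalette type="list">[#53A051,#F8BE34,#DC4E41]</colorPalette>
  <scale type="threshold">100,500</scale>
</format>
```

The thresholds apply to the whole result set, so a value's colour no longer depends on which page it lands on.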
In deployment-apps I created apps without a local folder. After reloading the server class, a local folder is automatically generated on the client server. Any idea why Splunk generates a local folder in the app?
Hi, how can I find the total number of batch jobs running for a time frame, or at any instant, and show them in a dashboard for a particular CSV file? Please help me with this.
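Without knowing the actual data layout, here is only a hypothetical sketch, assuming the CSV is ingested with a status field marking running jobs:

```
source="*batch_jobs.csv" status="running" earliest=-24h
| timechart span=1h count as running_jobs
```

The timechart output drops straight into a dashboard panel; swapping timechart for a plain stats count gives the total for the selected time frame instead.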
What does the error below mean, and how do I remediate it? It appears after running splunk restart splunkweb:   HTTP/1.1 404 Not Found
I've set up an alert where I'm saying to send the alert as soon as 1 record is found. But actually I want to wait for a few more events to happen in the next 5 minutes. I want my alert to wait for 5 minutes, collect all the events, and then send the report. Is there a way to make my alert wait until it has fetched all the events that will happen in the next five minutes?
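One common pattern (a sketch of savedsearches.conf settings, assuming a scheduled rather than real-time alert) is to run the search every 5 minutes over a lagged 5-minute window, so all events in the window have arrived before the alert fires:

```
[my_delayed_alert]
cron_schedule = */5 * * * *
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m
```

The 5-minute lag between latest_time and "now" is the waiting period; the alert then reports every event in the window at once instead of firing on the first match.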
Hello, following the upgrade of our corporate version of Splunk, we noticed a recurring problem with IOwait (for your information, we went from version 8.1.0.4 to 8.2.2.1). Below are the characteristics of the machine that runs Splunk standalone:

- 4 CPUs
- 8 GB of RAM
- 50 GB of disk space

We did not have this alert on the previous version 8.1.0.4. Can you tell us the reason for this alert since the version upgrade? Best regards.
Hello, with the goal of retrieving data stored in Splunk into Power BI, we want to install our company's certificate on port 8089 for communication with the Splunk API. Please give us the procedure to do this, knowing that we are on version 8.2.2.1 and that our machine is a development machine (standalone) which communicates with a remote LM on port 8089 of the API. (FYI, this company certificate is already used for the Splunk web interface on port 8000.) Best regards.
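On the splunkd side, the REST port's certificate is configured in server.conf rather than web.conf; a hedged sketch follows (the path is hypothetical and must be adapted, and the PEM must contain the server certificate followed by its private key):

```
[sslConfig]
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/company_cert_and_key.pem
sslPassword = <private key password>
```

A splunkd restart is needed afterwards, and the same CA chain that Power BI already trusts for port 8000 should then validate port 8089.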
Hi guys, I wanted to know if anyone knows whether you can populate a summary index from a data model. The summary index query requires the si* prefix on transforming commands (sistats), but the data model search also requires a tstats command, so they cannot be used in conjunction.
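One workaround that is often suggested (a hedged sketch; the data model name below is hypothetical): skip the si* commands entirely, compute the final aggregates with tstats, and write them to the summary index with collect:

```
| tstats count from datamodel=Network_Traffic where earliest=-1h@h latest=@h by _time span=10m
| collect index=my_summary
```

Scheduled every hour, this fills the summary index with pre-aggregated rows; the trade-off versus sistats is that later searches must aggregate the plain count field themselves rather than getting si-style statistics.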