All Topics


Hi All,
We are planning to migrate our entire Splunk environment to new servers next week and need a step-by-step process. The document below is not quite helpful for understanding the migration. Could anyone please provide the procedure based on our environment?
https://docs.splunk.com/Documentation/Splunk/8.1.1/Installation/MigrateaSplunkinstance
Architecture (Linux):
Server1 - Cluster master and Deployer, as separate Splunk instances
Server2 - Search head 1 (SHC)
Server3 - Search head 2 (SHC)
Server4 - Search head 3 (SHC)
Server5 - Indexer 1 (indexer clustering)
Server6 - Indexer 2 (indexer clustering)
@gcusello @somesoni2
BR, Devang
Hello All,
I have two errors on the search cluster for ANY search in Splunk, two errors per indexer:
ERROR - Could not load lookup=LOOKUP-minemeldfeeds_dest_lookup
ERROR - Could not load lookup=LOOKUP-minemeldfeeds_src_lookup
Can anyone point me in the right direction for a solution to these errors? Thanks
Hi,
I created a simple dropdown list on a dashboard. Now I want to modify the query with this dropdown list, e.g.:
my dropdown list: APP1, APP2, APP3, ...
my query: index="myindex" source="/data/APP1"
When I select APP2, the query should change to: index="myindex" source="/data/APP2"
FYI: I don't want to use a field or anything else, I just want to directly modify the query.
Any ideas? Thanks,
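As a sketch in Simple XML (the token name `app_tok` and the choice values below are assumptions, not from the original post), a dropdown input's token can be substituted directly into the query string, so selecting APP2 rewrites the source path:

```xml
<input type="dropdown" token="app_tok">
  <label>Application</label>
  <choice value="APP1">APP1</choice>
  <choice value="APP2">APP2</choice>
  <choice value="APP3">APP3</choice>
  <default>APP1</default>
</input>
...
<search>
  <!-- $app_tok$ is replaced with the selected choice value at search time -->
  <query>index="myindex" source="/data/$app_tok$"</query>
</search>
```

This keeps the base query fixed and lets the token rewrite only the part that varies, without extracting any extra field.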
Can someone please guide me on how to collect the following logs from Linux systems?
- changes to account privileges
- unsuccessful login and log-off events for privileged accounts
- any access attempts using deactivated accounts
Any help on this would be highly appreciated.
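As a rough sketch, these events typically land in the Linux auth and audit logs. Assuming a Universal Forwarder on a RHEL-style host (Debian-style systems use /var/log/auth.log instead of /var/log/secure), an inputs.conf fragment could look like:

```ini
# Login successes/failures, sudo use, account changes (useradd, usermod, passwd)
[monitor:///var/log/secure]
sourcetype = linux_secure

# auditd records; capturing privilege changes assumes audit rules are
# configured for files such as /etc/passwd, /etc/shadow, /etc/sudoers
[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
```

The sourcetypes shown follow the Splunk Add-on for Unix and Linux conventions; attempts against deactivated accounts still need auditd or PAM configured on the host to record them in the first place.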
Hello! This is regarding the Splunk-built Timeline viz: https://splunkbase.splunk.com/app/3120/
My question: will it ever be "elevated" to a standard Splunk viz that allows us more customization possibilities? For example:
- Moving the legend around (I want it underneath instead of to the right)
- Drilldown!
- More flexibility for changing colours
- Ability to add additional field details to the tooltip
It's a great viz, so it would be great for it to be just that little bit more powerful!
Thanks!
Andrew
I have a table like this in Splunk:

appname  value  time
app1     10     2020-12-30
app1     12     2020-12-31
app2     23     2020-12-30
app2     20     2020-12-31

I want to filter the records where the value is increasing over time. In this case, we should only find (app1, 12, 2020-12-31). How could I write the Splunk search (SPL) to implement this?
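A hedged sketch of one way to do this in SPL (the index name `myindex` is an assumption, and this relies on the `time` field sorting chronologically as a string): sort each app's rows by time, carry the previous value forward with streamstats, and keep only the rows that increased:

```spl
index="myindex"
| sort 0 appname time
| streamstats current=f window=1 last(value) as prev_value by appname
| where value > prev_value
| table appname value time
```

`current=f window=1` makes `prev_value` the value of the immediately preceding row for the same appname, so the first row per app (which has no predecessor) is dropped by the `where` clause.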
Hi Experts,
I have a textbox and 2 radio buttons in a dashboard. I want to reset the textbox to empty on click of the radio button. I am able to unset the token by using set/unset token, but the text in the textbox does not go away. Is it possible to change/remove the text of the textbox on click of the radio button?
Thanks in advance!
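One common sketch in Simple XML (the token names `radio_tok` and `text_tok` are assumptions): the visible contents of an input widget are driven by its `form.*` token, so clearing `form.text_tok` in the radio input's change handler, not just the plain token, is what empties the textbox:

```xml
<input type="radio" token="radio_tok">
  <label>Mode</label>
  <choice value="a">Option A</choice>
  <choice value="b">Option B</choice>
  <change>
    <!-- form.* tokens control what the input displays on screen -->
    <set token="form.text_tok"></set>
    <!-- also unset the plain token so searches see no stale value -->
    <unset token="text_tok"></unset>
  </change>
</input>
```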
I am having difficulty combining two individual searches. I have the following ldap search that lists the member names from group1 or group2:

| ldapsearch search="(&(objectClass=group)(|(cn=group1)(cn=group2)))" attrs="member"
| ldapfetch dn=member attrs="givenName, sn"
| eval user=givenName." ".sn
| table user

I want the ldap search to list the member names when it meets the criteria of the base search:

index=myindex EventCode=5136 action=success name="A directory service object was modified"

How do I combine the two?
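One possible sketch, with the big assumption that the 5136 events carry a `user` field formatted the same way as the `givenName sn` string built by the ldap search: run the event search as the base and join the ldap results as a subsearch, keeping only users present in both:

```spl
index=myindex EventCode=5136 action=success name="A directory service object was modified"
| join type=inner user
    [| ldapsearch search="(&(objectClass=group)(|(cn=group1)(cn=group2)))" attrs="member"
     | ldapfetch dn=member attrs="givenName, sn"
     | eval user=givenName." ".sn
     | table user]
```

If the event field is named differently (e.g. a SubjectUserName or DN), an `eval` or `rename` would be needed first so the join keys match; subsearch result limits also apply if the groups are large.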
Is it possible to disable a dashboard? I can see it is very easy to disable an app or a report, but I can't find anything to simply disable a dashboard. Am I missing something? Thanks
Hello friends,
Been hacking away at the G Suite For Splunk app for about a day trying to figure out why we're getting the above error message on some inputs. I've gone all through the customer's proxy rules and ensured that's all good, as well as ensuring the install and permissioning of Splunk is also all good.
Here's the full error message I am seeing:

2021-02-01 03:27:37,481 log_level=ERROR pid=16382 tid=MainThread file="ModularInput.py" function="print_error" line_number="672" version="GSuiteForSplunk.v1.4.2.b310" host=GSuiteForSplunk sourcetype=GSuiteForSplunk:error source=gapps:SomeCustomers.com {"timestamp": "Mon, 01 Feb 2021 03:27:37 +0000", "log_level": "ERROR", "errors": [{"msg": "'dict' object is not callable", "exception_type": "<class 'TypeError'>", "exception_arguments": "'dict' object is not callable", "filename": "ga.py", "line": 164, "input_name": "ga://Activity_Access_Transparency"}]}

Splunk Version: 8.0.5
OS: CentOS 7.9.2009 (x86_64) out of AWS
Stack Overflow gives me back a fix that involves changing around the brackets used in the script for calling back a dictionary; not sure I want to touch that one just yet. https://stackoverflow.com/questions/6634708/typeerror-dict-object-is-not-callable#6634727
Any help or pointers on this would be very much appreciated.
Hi,
I am new to Splunk but have noticed that on the Settings > Indexes screen there are columns for these values:
- Event Count
- Earliest Event
- Latest Event
These are very useful, but on one particular installation I am supporting there are no values in these columns, and the current size for all these indexes shows as 1MB. The Splunk version is 7.3.6.
Any idea what could be causing this? Thanks
I have logs stored in Microsoft Blob Storage that are compressed as .xz files, but they are not named with that extension; they are in the format kubernetes-<datetime> (example: kubernetes-202101310701). What I'm trying to do is ingest these logs into Splunk using the Microsoft Cloud Services app. Because these files are compressed, I believe I need to run the unarchive_cmd against them using props.conf, but I'm not sure this is even supported with this app. I've searched high and low and have not come across any information that supports it. As a side note, these files are Kubernetes logs coming from SAP CC2V, so I do not have any control over how they are written to blob storage; I can only access them after the fact. When I enable the application the data starts to stream in, but it's all gibberish because the files are compressed. Here is what I get:

1/31/21 10:32:25.000 AM   Geq��)�5xi� ��B�;X�%���Ul���N�ioG�����X��o��47`�RK�Bd�g�x�A���ʪe���a�E�V�����xUS<x�5=�H�R�4��2

Event fields:
_time = 2021-01-31T10:32:25.000-08:00
host = ip-10-151-4-90
index = test
punct = )t;%<=
source = kubernetes-202101311433
sourcetype = mscs:storage:blob:k8
splunk_server = idx-i-<redacted>?.splunkcloud.com

Below is what I'm trying...
inputs.conf:

[mscs_storage_blob://SAP S3 Logs]
disabled = 0
account = SAP S3
blob_list = kubernetes*
blob_mode = append
collection_interval = 3600
container_name = commerce-logging
sourcetype = mscs:storage:blob:k8
index = test

props.conf:

[source::...(.*)]
invalid_cause = archive
unarchive_cmd = /usr/bin/xz -cd -
sourcetype = mscs:storage:blob:k8
KV_MODE = json
NO_BINARY_CHECK = true

[mscs_storage_blob://SAP S3 Logs]
invalid_cause = archive
unarchive_cmd = /usr/bin/xz -cd -
sourcetype = mscs:storage:blob:k8
KV_MODE = json
NO_BINARY_CHECK = true

[mscs:storage:blob]
invalid_cause = archive
unarchive_cmd = /usr/bin/xz -cd -
sourcetype = mscs:storage:blob:k8
KV_MODE = json
NO_BINARY_CHECK = true

[mscs:storage:blob:k8]
invalid_cause = archive
unarchive_cmd = /usr/bin/xz -cd -
sourcetype = mscs:storage:blob:k8
KV_MODE = json
NO_BINARY_CHECK = true

I know the props.conf is not correct and does not need that many stanzas, but I tried adding all of these in an attempt to get it to work, as I'm not even sure it's using the props.conf file at all. As a side note, if I decompress the file in Azure Blob and then ingest it, it works perfectly. So the question is: can I use 'invalid_cause' and 'unarchive_cmd' in the props for the Microsoft Cloud Services app? If this doesn't work I need to come up with another solution; I'm thinking I could copy the files locally, run them through a standard file monitor process, and attempt to run the unarchive command there.
How do I convert

_time   ColumnA   ColumnB
timeA   10        20

into

_time   Fields    Value
timeA   ColumnA   10
timeA   ColumnB   20
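A sketch of one way: the `untable` SPL command rotates a wide table into (field, value) rows keyed by a row-identifier column, which is exactly this wide-to-long shape. Using `makeresults` here only to fabricate a one-row example:

```spl
| makeresults
| eval ColumnA=10, ColumnB=20
| table _time ColumnA ColumnB
| untable _time Fields Value
```

Appending `| untable _time Fields Value` to the original search should produce one row per (_time, column) pair, with the column name in `Fields` and its cell in `Value`.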
I am trying to use the REST API modular input to get data into Splunk from a REST endpoint. Unfortunately, to get all the data I need to pull from an ever-increasing number of URLs. Let me try to explain better.
https://[URL for the API]/ is the top level of the API. This contains common data across each collection of information I need from the API. One piece of information contained here is ID, which can be appended to the URL to bring back detailed information on that one thing. I can successfully get this into Splunk with a very simple configuration.
https://[URL for the API]/[ID] is the detailed information about one record in the top level of the API. Currently there are thousands of these and the number is growing. So far I cannot find a way to get this data into Splunk without setting up an individual data input for each one.
Is there a way to get data from every ID sub-page into Splunk by setting variables or wildcards in the configuration of a REST API modular data input, or am I on the wrong path and do I need to take a completely different route to solving this?
Hi, I'm using the free cloud trial, and none of the URLs suggested in the documentation work:
- [HOST]/services/collector throws a 303 error, redirecting to [HOST]/en-GB/services/collector, which in turn throws a 404 error.
- input-[HOST], inputs-[HOST], and http-inputs-[HOST] do not resolve.
- inputs.[HOST] resolves, but throws an SSL error because the wildcard cert attached to it does not cover the extra tier in the FQDN.
- [HOST]:8088 resolves, but throws an SSL error because the cert attached to it does not match the FQDN (SplunkServerDefaultCert).
Any idea what I should be using?
TIA, Martin...
Greetings! Kindly help me better understand the purpose of fine-tuning queries, based on your experience.
Hi, I would like to increase the cold retention period for index [pa] to 180 days, but when I look in indexes.conf I only see the configuration below; there is no frozenTimePeriodInSecs setting for index pa.

# Index for Palo Alto Networks
# This index is required by Splunk_TA_paloalto
[pa]
repFactor = auto
homePath   = volume:hot/pa/db
homePath.maxDataSizeMB = 512000
coldPath   = volume:cold/pa/colddb
coldPath.maxDataSizeMB = 512000
thawedPath = $SPLUNK_DB_THAWED/pa/thaweddb
coldToFrozenDir = $SPLUNK_DB_FROZEN/pa/frozendb

Where can I find the time setting for this index?
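For reference, a sketch of how this usually works: when frozenTimePeriodInSecs is absent from an index stanza, Splunk uses the value inherited from the [default] stanza or, failing that, the built-in default of 188697600 seconds (about 6 years). To set 180 days explicitly, the setting could be added to the [pa] stanza (180 days x 86400 seconds/day = 15552000):

```ini
[pa]
frozenTimePeriodInSecs = 15552000
```

The effective value from all layered conf files can be checked with btool, e.g. `$SPLUNK_HOME/bin/splunk btool indexes list pa --debug`.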
We are currently deploying a solution across the client's environment using version 8.x; however, an existing third party already has some servers where Splunk v7.0 is deployed. To avoid responsibility conflicts and keep the two Splunk deployments totally separate, we are working with different locations and ports; however, we are not able to locate the conf file to change splunkd to something else. This would prevent either team from killing the other's daemon by mistake. Please, any lead will help us. Tks.
Hi all,
I just want to ask whether it makes sense to back up the indexed data before the upgrade of the peer nodes (indexer nodes), or is it enough to back up only the Splunk home directory?
Thanks
For a certain time range, I want to group together the counts in a single row, divided into equal time slices. For example, for "-15m" I want to see 5-minute counts, something like this:

index   Last15MinCount   Last10MinCount   Last5MinCount
APP1    100              123              345
APP2    32               55               60

The idea is for me to compare the Last5MinCount to the average of Last15MinCount and Last10MinCount. I could not find a suitable way of simplifying my query, but I got this instead (note: the relative_time strings should include "@m" for minute snapping; this forum turns "@" into member links, so it is omitted below):

index=* earliest=-15m
| rex field=message "(?i)(?<ORG>[C]{0,1}+MS\w*)+(?i)\.(?<ENV>[dev|test|prod]+(-pci){0,1})\.+(?i)(?<APP>[\w-]+)"
| rex field=_raw ".*(?<level>LEVEL)[\s\S]{0,5}(?<code>FATAL|ERROR|WARN|DEBUG|INFO).*"
| eval time15=relative_time(now(), "-15m")
| eval time10=relative_time(now(), "-10m")
| eval time05=relative_time(now(), "-05m")
| eval time00=relative_time(now(), "-00m")
| eval etime=_time
| eval Time=case(tonumber(etime)>tonumber(time15) AND tonumber(etime)<=tonumber(time10), "Last15", tonumber(etime)>tonumber(time10) AND tonumber(etime)<=tonumber(time05), "Last10", tonumber(etime)>tonumber(time05) AND tonumber(etime)<=tonumber(time00), "Last05")
| stats count(eval(Time=="Last15")) AS Last15 count(eval(Time=="Last10")) AS Last10 count(eval(Time=="Last05")) AS Last05 by APP

This gives me the desired rows. My question is about these lines:

| eval time15=relative_time(now(), "-15m")
| eval Time=case(tonumber(etime)>tonumber(time15) AND tonumber(etime)<=tonumber(time10), "Last15", ...)
| stats count(eval(Time=="Last15")) AS Last15

They can probably be combined into one line, but I could not find the most appropriate function. Any help in simplifying this would be appreciated. Thanks!
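A hedged sketch of one possible simplification, assuming 5-minute boundaries relative to now() are acceptable: since the three buckets are contiguous, a single case() comparing _time against two boundaries (case returns the first matching branch) plus a chart can replace the separate boundary evals and the three counting evals:

```spl
index=* earliest=-15m
| rex field=message "(?i)(?<ORG>[C]{0,1}+MS\w*)+(?i)\.(?<ENV>[dev|test|prod]+(-pci){0,1})\.+(?i)(?<APP>[\w-]+)"
| eval Time=case(_time<=relative_time(now(), "-10m"), "Last15",
                 _time<=relative_time(now(), "-5m"),  "Last10",
                 true(),                              "Last05")
| chart count over APP by Time
```

`chart count over APP by Time` produces one row per APP with one column per bucket, which matches the desired layout; comparing the Last05 column to the average of the other two can then be done with a trailing eval.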