All Topics

Hello, I have set up the Splunk App for AWS on Splunk Enterprise and it is working quite well, except for the Topology dashboard, which requires the Python for Scientific Computing application to function. After installing that app, the AWS app keeps giving the error: "Insights Service unavailable. The Insights Service depends on the Python for Scientific Computing application, which is not installed or incorrectly configured." My Splunk version is 7.3.6 and I have installed Python for Scientific Computing version 1.4. The Splunk App for AWS is on version 6.0.2 and the Splunk Add-on for AWS is on version 4.6.1. I have tried uninstalling and restarting, but the result is the same. Any suggestions? Thanks.
hi, I use this search to retrieve events between 09:00 and 17:00. Now I also want to keep only the events that occur between Monday and Friday. How can I do this, please?

```
`CPU`
| bin _time span=5h
| eval slottime = strftime(_time, "%H%M")
| where (slottime >= 900 AND slottime <= 1700)
```
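One possible approach, sketched under the assumption that the `CPU` macro returns the relevant events: `strftime` with `%w` yields the day of week as a number (0 = Sunday through 6 = Saturday), so keeping values 1 through 5 restricts results to Monday-Friday. This is a sketch, not tested against the actual data:

```
`CPU`
| bin _time span=5h
| eval slottime = tonumber(strftime(_time, "%H%M"))
| eval wday = tonumber(strftime(_time, "%w"))
| where slottime >= 900 AND slottime <= 1700 AND wday >= 1 AND wday <= 5
```

Wrapping both `strftime` results in `tonumber` also avoids comparing a string field against numeric literals in the `where` clause.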
Hi, I want to create an alert that tracks Windows events (EventCode 4726, "A user account was deleted"). I have a user list in a field named "user" in the adminuser.csv file, and I want to exclude these users. How can I do it? Should I use lookup or inputlookup, and which one is more efficient? The following query does not return the result I want:

```
index=wineventlog source="WinEventLog:Security" EventCode=4726
| search src_user NOT [ | inputlookup adminuser.csv ]
```
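One likely cause, offered as a sketch: the subsearch returns the lookup's own column name ("user"), which never matches the event field `src_user`, so nothing is excluded. Renaming the field inside the subsearch so its output becomes a `src_user=... OR src_user=...` expression, and negating that, may give the intended exclusion:

```
index=wineventlog source="WinEventLog:Security" EventCode=4726
    NOT [ | inputlookup adminuser.csv | rename user AS src_user | fields src_user ]
```

This assumes the lookup column is named `user` as described; adjust the `rename` if the column differs.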
I have a field in a log like "policies":["Test1"], which I am not able to search by keyword with the query index=myindex host=myhost policies=Test1. Since policies is a list, I can't search it directly. Is there a specific way I can search for values contained in a list?
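A possible approach, assuming the events are JSON and auto-extraction names array fields with trailing braces (here `policies{}`): use `spath` to pull the array values into a plain multivalue field and then filter on it. A sketch:

```
index=myindex host=myhost
| spath path=policies{} output=policy
| search policy=Test1
```

Alternatively, searching the brace-named field directly (quoted, e.g. `"policies{}"=Test1`) may also work, depending on how the sourcetype is extracted.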
Hi, I have a heavy forwarder (HFW) and an indexer in my environment. I'm looking to filter certain events from a log source on the HFW, based on a regex, before they reach the indexer. Please find the config files below; I'm not sure where I'm going wrong.

props.conf:
```
[testing]
TRANSFORMS-routing = conGroup
```

inputs.conf:
```
[monitor:///var/logs]
disabled = false
index = main
sourcetype = testing
host = HFW
```

transforms.conf:
```
[conGroup]
REGEX = ^(((?!i-)(?!sample)).)*$
DEST_KEY = _TCP_ROUTING
FORMAT = test
```

outputs.conf:
```
[tcpout]
defaultGroup = nothing

[tcpout:test]
server = X.X.X.X:9997
```

Any help will be much appreciated. Thanks in advance.
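If the goal is to discard rather than route the unwanted events, one common pattern is to send the matching events to `nullQueue` in transforms.conf instead of changing `_TCP_ROUTING`. A sketch, where the stanza name `dropUnwanted` is illustrative and the regex would need to match the events to drop:

```
# transforms.conf -- sketch only; pair with TRANSFORMS-drop = dropUnwanted in props.conf
[dropUnwanted]
REGEX = ^(((?!i-)(?!sample)).)*$
DEST_KEY = queue
FORMAT = nullQueue
```

Note that index-time TRANSFORMS must be applied where parsing happens (the heavy forwarder in this setup), not on the indexer, since the HFW has already parsed the data.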
Hi, I have the below table:

```
File_System    Disk_Usage
\logs          41
\opt           73
\var           69
\apps          48
```

Here I want to create a trellis view of the different File_System values, showing the Disk_Usage, under a single panel (e.g. xyz). Can someone please help me create the query to get the trellis view in the desired manner? Thank you.
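As a sketch (the index and sourcetype names here are placeholders): trellis layout splits a single-value or chart visualization by a field, so a query that aggregates one value per File_System should be enough, with "Trellis" enabled in the visualization format menu and split set to File_System:

```
index=myindex sourcetype=disk_usage
| stats latest(Disk_Usage) AS Disk_Usage BY File_System
```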
hi, I found some errors when using the Splunk Add-on for Microsoft Cloud Services. I then used `index=_internal sourcetype="mscs:storage:blob:log" ERROR` to troubleshoot. Details are below. The key phrase is "The range specified is invalid for the current size of the resource," but I'm not really sure what "range" or "size" means here.

```
2021-02-09 04:54:37,584 +0000 log_level=ERROR, pid=83226, tid=ThreadPoolExecutor-0_0, file=mscs_storage_blob_data_collector.py, func_name=collect_data, code_line_no=66 | [stanza_name="logstoragetest_1" account_name="logstorage***" container_name="None" blob_name="None"] Error occurred in collecting data
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/mscs_storage_blob_data_collector.py", line 63, in collect_data
    self._do_collect_data()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/mscs_storage_blob_data_collector.py", line 90, in _do_collect_data
    end_range=received_bytes + self._batch_size - 1)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/blob/baseblobservice.py", line 2093, in get_blob_to_bytes
    timeout)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/blob/baseblobservice.py", line 1917, in get_blob_to_stream
    raise ex
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/blob/baseblobservice.py", line 1886, in get_blob_to_stream
    timeout=timeout)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/blob/baseblobservice.py", line 1599, in _get_blob
    response = self._perform_request(request, None)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/storageclient.py", line 195, in _perform_request
    _storage_error_handler(HTTPError(response.status, response.message, response.headers, response.body))
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/_serialization.py", line 125, in _storage_error_handler
    return _general_error_handler(http_error)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/azure/storage/_error.py", line 74, in _general_error_handler
    raise AzureHttpError(message, http_error.status)
azure.common.AzureHttpError: The range specified is invalid for the current size of the resource.
<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidRange</Code><Message>The range specified is invalid for the current size of the resource.
RequestId:ab2ec0c6-201e-0007-799f-fe895c000000
Time:2021-02-09T04:54:37.5760135Z</Message></Error>
```

Did anybody meet this before? Thanks in advance.
Hi, I have the below type of logs:

```
log1: Mon Feb 8 02:57:36 EST 2021 41% /logs
log2: Mon Feb 8 02:57:36 EST 2021 73% /opt
log3: Mon Feb 8 02:57:36 EST 2021 69% /var
log4: Mon Feb 8 02:57:36 EST 2021 48% /apps
```

I want to create a table as below:

```
File_System    Disk_Usage
\logs          41
\opt           73
\var           69
\apps          48
```

Here I want to extract the Disk_Usage and File_System fields with their respective values. This might be a very silly question, but I might be missing something while creating the rex command, so please help me create it. Your kind support will be highly appreciated. Thank you.
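A possible rex, sketched against the sample lines above (the index and sourcetype names are placeholders): capture the digits before the percent sign as Disk_Usage and the trailing path as File_System.

```
index=myindex sourcetype=disk_logs
| rex "(?<Disk_Usage>\d+)%\s+(?<File_System>\S+)"
| table File_System Disk_Usage
```

Note the raw logs show forward slashes (/logs) while the desired table shows backslashes (\logs); an extra `eval File_System = replace(File_System, "/", "\\")` could convert them if the backslash form is really wanted.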
Hi, I have a question related to M365 service subscriptions. Can the Splunk Add-on for Microsoft Office 365 get the data/events of an additional subscription, such as Enterprise Mobility + Security E3, that has been enabled for M365? Is there any Splunk documentation stating that the Splunk Add-on for Microsoft Office 365 can get the data/events of any additional subscriptions enabled for M365? Thanks
Hi! I'm looking for your help because I want to upgrade my Splunk deployment. Currently, all my forwarders are running version 6.2.3 (on FreeBSD), and my indexer and search head are running version 7.3 (those devices run CentOS 11.2). The big question: can I upgrade my indexer and search head to 8.1.0.1 and keep my forwarders on version 6.2.3? Will the forwarders send events to the indexer without problems? I'm considering two scenarios for my upgrade: forwarders on 6.2.3 -> indexer on 8.1.0.1 -> search head on 8.1.0.1, or forwarders on 6.2.3 -> indexer on 7.2.3 -> search head on 8.1.0.1. What do you think, guys? I really appreciate any help you can give me!
I have some domains like this: domain | A | B | C | D | ... One domain can be called in one request. Now I want to know the average number of requests per minute per domain (no matter which domain). So I split it into three steps: 1) get the total request count per minute; 2) get the number of distinct domains called per minute; 3) avg = total requests per minute / number of domains per minute. I have got the result of the first step with:

```
index="whatever" source="sourceurl"
| bin _time span=1m
| stats count as requestsPerMin by _time
```

However, I don't know how to get the number of domains that were called. For example, if in one minute domain A was called twice and domain B once, then the number of domains called should be two. But I don't know which query gets this result. I'd appreciate any help; sorry if this is a duplicate.
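The three steps above can be sketched in one query, assuming the field holding the domain name is literally called `domain`: `dc()` gives the distinct count per time bucket, and an `eval` divides the two aggregates.

```
index="whatever" source="sourceurl"
| bin _time span=1m
| stats count AS requestsPerMin dc(domain) AS domainsPerMin BY _time
| eval avgPerDomain = requestsPerMin / domainsPerMin
```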
Hey all, quick question, and I apologize in advance if this isn't the proper sub-forum for it. In a scaled multi-site cluster deployment, is it normal for your deployment servers to be listening on the default HEC collection port, TCP 8088? I ask because my deployment server is, even though there is no explicit configuration on the server exposing port 8088. To my knowledge I have no configurations under "$splunkHome/etc/system/local/" or "$splunkHome/etc/apps/" that would expose TCP 8088 to the world. There are only two lines in my "$splunkHome/etc/apps/splunk_httpinput/local/inputs.conf" file:

```
[http]
useDeploymentServer = 1
```

As far as I know, this stanza tells the Splunk software to place any HTTP token configurations made via the deployment server UI straight into "$splunkHome/etc/deployment-apps/splunk_httpinput/local/inputs.conf" so they can be pushed out to indexers / heavy forwarders, but it should not enable HTTP event collection on TCP 8088 on the deployment server itself. As a newbie Splunk admin, where else could I check to see what's causing my deployment server to listen on TCP port 8088?
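Two environment-dependent checks that may help narrow this down, sketched as CLI commands to run on the deployment server itself (output will vary by host):

```
# confirm which process owns the listener
netstat -tlnp | grep 8088

# show the effective, merged [http] configuration and which file each line comes from
$SPLUNK_HOME/bin/splunk btool inputs list http --debug
```

If `btool` shows `disabled = 0` (or no `disabled` line) for the `[http]` stanza, the HEC listener is effectively enabled regardless of where the stanza lives.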
hi, we have the following setup: 1 cluster master, 3 indexers, 1 deployment server, 3 search heads, 1 heavy forwarder, and more than 200 potential Splunk forwarder servers (Linux and Windows). The plan is to share outputs.conf with those server owners so they can install and configure the forwarder on their servers and use this outputs.conf file. The outputs.conf I created during the POC looks like this:

```
[indexer_discovery:poc-cluster-master]
pass4SymmKey = {password value}
master_uri = https://poc-cluster-master:8089

[tcpout:poc-clustermastergroup]
autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = poc-cluster-master
useACK = true

[tcpout]
defaultGroup = poc-clustermastergroup
```

The (possibly dumb) questions I have: how can I share this file with everyone without sharing {password value}? Can this password be anything, or does it need to be the same as the CM admin password? And if so, is there a better way of deploying the forwarder on all servers without sharing this password, e.g. Puppet? Any tweak or suggestion to make this stanza better (not necessarily prettier) performance-wise is welcome. regards, SR
Hello all, we have a Splunk multisite indexer cluster across 2 data centers. Each site has 3 nodes in the cluster running Splunk Enterprise 7.3.2. We are closing down one of the sites, and I need to move its 3 indexers to a third site we are moving to. It could take up to 7 days to have the hosts moved from one site to another, racked, and renamed / re-IPed. Our clustering factors are: site_replication_factor = origin:2,total:3 and site_search_factor = origin:2,total:3. What would be the best approach? 1) Move one host at a time, wait for data replication to complete, and then move the next? 2) Move all hosts at the same time and add them back to the cluster one at a time? 3) Do I need to place the cluster in maintenance mode before the move? 4) At the end of the move we will keep the 2-site environment; should I create a new site and move the indexers there? Thank you very much, Gerson Garcia
Hi, I am trying to send metrics to the controller using the Machine Agent HTTP Listener, but the agent is discarding the metrics since it is unregistered. Do I have to register it manually?

```
$URL = "http://localhost:9999"
$Endpoint = "/api/v1/metrics"
$URLFull = "$URL$Endpoint"
$JSON = ConvertTo-Json @(@{"metricName"="Custom Metrics|DFSR Backlog" + "|CommonData|" + $thishost;"aggregatorType"="OBSERVATION";"value"=$backlog_count})
Write-Output $JSON
$JSONHeaders = @{
    "Content-Type"="application/json"
    "Media-Type"="application/json"
    "Accept"="application/json"}
$SendMetric = Invoke-RestMethod -Method Post -Uri $URLFull -Headers $JSONHeaders -Body $JSON
```

This is the JSON that is being passed:

```
[
    {
        "aggregatorType": "OBSERVATION",
        "metricName": "Custom Metrics|DFSR Backlog|CommonData|SERVER1",
        "value": -1
    }
]
```
I've been running Splunk for some time, and with a recent update (though I can't recall which one) I noticed that the `bin/splunk start` I run after an upgrade would fail:

```
Starting splunk server daemon (splunkd)...
Done
                                                           [  OK  ]
Waiting for web server at https://:443 to be available.......................................................................................................................^C
```

The program eventually returns from that with an error that the web server wasn't available, but it is; in fact, as soon as it starts to check, the web server is already running fine and handling traffic. I note that the URL in this output is strange, and sometimes there seem to be unprintable characters in it. I've checked etc/system/local for any files that have unprintable characters or otherwise seem amiss, but things there are fine. web.conf specifies the IP address for the appropriate interface, and server.conf specifies the hostname for it. Another test I just did resulted in this:

```
Waiting for web server at https://rops.conf:443 to be available...
```

It really seems like some uninitialized value somewhere, but splunkd.log doesn't show any related errors. Currently running 8.1.2 on RHEL7.
Hello all! I was hoping to take a distinct count and show either the count or, if the count is 1, the value being counted. For example,

```
index=random | stats dc(src_port) AS port_count count by src_ip
```

would populate:

```
src_ip     port_count
1.2.3.4    6
2.3.4.5    (1) Port 443
3.4.5.6    4
```

Or something to this effect. Thanks!
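One way to sketch this: collect the distinct values alongside the distinct count with `values()`, then use `eval` with `if` to swap in the single port when there is only one. Untested against the actual data:

```
index=random
| stats dc(src_port) AS port_count values(src_port) AS ports count BY src_ip
| eval port_count = if(port_count == 1, "Port " . mvindex(ports, 0), tostring(port_count))
| fields - ports
```

`tostring` keeps the column a single type, since mixing numbers and strings in one field can make sorting behave oddly.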
Dear experts: I'm new to Splunk. I have a search that outputs a list of devices with event counts greater than 20 as a report, for example:

```
event_date    src        events
2021-02-08    device1    102
2021-02-08    device2    20
```

I need a new search that looks into the event details of each device on the list, to create the final report and alerts if applicable. The report has to be dynamic, generated as part of my search each time it runs as an hourly scheduled task; it cannot be a static CSV lookup. Please advise on strategies and code. Thank you, Lisa
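One dynamic pattern, sketched with placeholder index and field names: run the device-list search as a subsearch, so the outer search is automatically restricted to just those devices on every scheduled run, with no static lookup involved.

```
index=myindex
    [ search index=myindex
      | stats count AS events BY src
      | where events > 20
      | fields src ]
| stats count BY src
```

The subsearch returns a `src=... OR src=...` filter to the outer search; the outer pipeline can then be extended with whatever detail fields the final report needs.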
I am creating a dashboard to collect the past 30 days of data on countries and hits. I am new to Splunk dashboards/reports/analytics. I've learned to use Splunk over the past 5 days, and running a query is essentially coding in "Splunk," similar to how creating a dashboard in ServiceNow is coding in ServiceNow. I need to know what to enter into my query to create a new column with the date of each data point. It's a simple ask, and I cannot find the answer anywhere on your forum or in the documentation.
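A minimal sketch, with placeholder index, sourcetype, and field names: every event carries its timestamp in `_time`, so an `eval` with `strftime` can render it as a date column for the table.

```
index=web sourcetype=access_combined earliest=-30d
| eval date = strftime(_time, "%Y-%m-%d")
| stats count AS hits BY date, country
```

`timechart span=1d count BY country` is an alternative that produces one row per day without an explicit `eval`.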
Hi, I have a problem with the timestamp of my logs, which is the same for all events, whereas there should be one event each minute. I can also see a "none" in the timestamp field. Here are some raw events:

```
{"dimensions": ["CLOUD_APPLICATION_NAMESPACE", "CLOUD_APPLICATION_INSTANCE_DEPLOYMENT_TYPE_KUBERNETES_STATEFUL_SET"], "metricId": "builtin:cloud.kubernetes.namespace.memoryRequests", "timestamp": 1612807800000, "value": 6144000000.0}
{"dimensions": ["CLOUD_APPLICATION_NAMESPACE", "CLOUD_APPLICATION_INSTANCE_DEPLOYMENT_TYPE_KUBERNETES_STATEFUL_SET"], "metricId": "builtin:cloud.kubernetes.namespace.memoryRequests", "timestamp": 1612807740000, "value": 6144000000.0}
{"dimensions": ["CLOUD_APPLICATION_NAMESPACE", "CLOUD_APPLICATION_INSTANCE_DEPLOYMENT_TYPE_KUBERNETES_STATEFUL_SET"], "metricId": "builtin:cloud.kubernetes.namespace.memoryRequests", "timestamp": 1612807680000, "value": 6144000000.0}
{"dimensions": ["CLOUD_APPLICATION_NAMESPACE", "CLOUD_APPLICATION_INSTANCE_DEPLOYMENT_TYPE_KUBERNETES_STATEFUL_SET"], "metricId": "builtin:cloud.kubernetes.namespace.memoryRequests", "timestamp": 1612807620000, "value": 6144000000.0}
{"dimensions": ["CLOUD_APPLICATION_NAMESPACE", "CLOUD_APPLICATION_INSTANCE_DEPLOYMENT_TYPE_KUBERNETES_STATEFUL_SET"], "metricId": "builtin:cloud.kubernetes.namespace.memoryRequests", "timestamp": 1612807560000, "value": 6144000000.0}
{"dimensions": ["CLOUD_APPLICATION_NAMESPACE", "CLOUD_APPLICATION_INSTANCE_DEPLOYMENT_TYPE_KUBERNETES_STATEFUL_SET"], "metricId": "builtin:cloud.kubernetes.namespace.memoryRequests", "timestamp": 1612807500000, "value": 6144000000.0}
```

Here is my props.conf (applied on the heavy forwarder, not the search head):

```
[my_sourcetype]
SHOULD_LINEMERGE = false
TIME_PREFIX = timestamp
TIME_FORMAT = %s%3Q
TRUNCATE = 999999
MAX_EVENTS = 10000
```

Can you tell me what is wrong?
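A possible culprit, offered as a sketch: `TIME_PREFIX` is a regex describing everything up to the first digit of the timestamp, so `TIME_PREFIX = timestamp` leaves the characters `": ` between the match and the epoch digits, which can make extraction fail and fall back to a single default time. Extending the prefix past the quote and colon, and bounding the lookahead to the 13-digit epoch-milliseconds value, may fix it:

```
# props.conf -- sketch only, assuming the JSON shape shown above
[my_sourcetype]
SHOULD_LINEMERGE = false
TIME_PREFIX = \"timestamp\":\s*
TIME_FORMAT = %s%3Q
MAX_TIMESTAMP_LOOKAHEAD = 13
TRUNCATE = 999999
MAX_EVENTS = 10000
```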