All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

In Splunk apps, I need to build a query with an ORDER BY clause, which is working fine, but my requirement is to build a query with ORDER BY DESC.
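If this is a dbquery-style SQL query, descending order is just `ORDER BY <column> DESC`; a sketch with hypothetical database, table, and column names (not taken from the question):

```
| dbquery "mydb" "SELECT host, call_count FROM my_table ORDER BY call_count DESC"
```

If the ordering is instead done on the Splunk side, `| sort - call_count` (note the leading minus) gives the same descending order in SPL.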
For some time now (3-4 weeks?) the add-on doesn't download data. Is it only for me, or is anyone else seeing this behavior?
Symptoms and tests to confirm

The entire cluster becomes unstable, with the Cluster Master showing indexers flapping from up to down. With a farm of two-layer proxy servers, you will see intermittent HTTP errors uploading to SmartStore:

10-07-2019 15:13:42.821 +0100 ERROR RetryableClientTransaction - transactionDone(): groupId=(nil) rTxnId=… transactionId=…. success=N HTTP-statusCode=502 HTTP-statusDescription="network error" retries=0 retry=N no_retry_reason="no retry policy" remainingTxns=0
10-07-2019 15:13:42.821 +0100 ERROR CacheManager - action=upload, cache_id="bid|_internal~….|", status=failed, unable to check if receipt exists at path=_internal/db/…/receipt.json(0,-1,), error="network error"
10-07-2019 15:13:42.821 +0100 ERROR CacheManager - action=upload, cache_id="bid|_internal~…|", status=failed, elapsed_ms=15016

Crash logs with:

[build 7651b7244cf2] 2019-10-07 11:17:36 Received fatal signal 6 (Aborted). Cause: Signal sent by PID 2599 running under UID 0. Crashing thread: cachemanagerUploadExecutorWorker-180

Testing: ./splunk cmd splunkd rfs -- ls --starts-with volume:XXXXXXX returns no results because of a connection timeout with Bad Gateway 502. Testing: wget against the AWS S3 instance also returns Bad Gateway.

To confirm the issue with a repro:

Step 1. Change the parameter values below in server.conf to 200:
[cachemanager]
max_concurrent_downloads = 200
max_concurrent_uploads = 200

Step 2. Block the connection from the peers to S3 using:
echo "127.0.0.1 s3-us-west-2.amazonaws.com" >> /etc/hosts

What was observed:
1. Peers were unable to upload buckets to remote storage (which is expected).
2. Peers constantly retried uploading the buckets.
3. Peers were marked Down by the CM, since they could not heartbeat to the CM while they were busy retrying bucket uploads with so many threads in parallel, which put extra pressure on the CMSlave lock.
Below is the pstack I collected from one of the indexers. The thread holding the CMSlave lock while making an S3 HEAD request to check whether a file is present on S3:

Thread 14 (Thread 0x7f8b04dff700 (LWP 8834)):
#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x00005639b0d24e27 in EventLoop::run() ()
#2 0x00005639b0dece00 in TcpOutboundLoop::run() ()
#3 0x00005639b08928e9 in RetryableClientTransaction::_run_sync(bool) ()
#4 0x00005639b0930c44 in S3StorageInterface::fileExists(StorageObject const&, Str*, RemoteRetryPolicy*) ()
#5 0x00005639b04eb4b0 in cachemanager::CacheManagerBackEnd::isRemoteBucketPresent(cachemanager::CacheId const&, Pathname const&, bool, ScopedPointer*) const ()
#6 0x00005639b04f2bc1 in cachemanager::CacheManagerBackEnd::isBucketStable(cachemanager::CacheId const&, cachemanager::CacheManagerBackEnd::CheckScope, bool, ScopedPointer*) ()
#7 0x00005639b03435c7 in DatabaseDirectoryManager::isBucketStable(cachemanager::CacheId const&, cachemanager::CacheManagerBackEnd::CheckScope, bool, bool, ScopedPointer*) ()
#8 0x00005639b0f92f64 in CMSlave::manageReplicatedBucketsTimeoutS2_locked() ()
#9 0x00005639b0f93c9d in CMSlave::service(bool) ()
#10 0x00005639b00e09f3 in CallbackRunnerThread::main() ()
#11 0x00005639b0dedfa9 in Thread::callMain(void*) ()
#12 0x00007f8b0d9614a4 in start_thread (arg=0x7f8b04dff700) at pthread_create.c:456
#13 0x00007f8b0d6a3d0f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97

Meanwhile other threads, such as the heartbeat thread and other operations, are waiting for this lock to be released. The heartbeat thread waiting for the lock:

Thread 60 (Thread 0x7f8afa7ff700 (LWP 9053)):
#0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1 0x00007f8b0d963bb5 in GI_pthread_mutex_lock (mutex=0x7f8b0d0818f8) at ../nptl/pthread_mutex_lock.c:80
#2 0x00005639b0dedcd9 in PthreadMutexImpl::lock() ()
#3 0x00005639b0f71f55 in CMSlave::getHbInfo(Str&, Str&, unsigned int&, CMPeerStatus::ManualDetention&, bool&, long&, unsigned long&) ()
#4 0x00005639b1005b8c in CMHeartbeatThread::when_expired(Interval*) ()
#5 0x00005639b0df634c in TimeoutHeap::runExpiredTimeouts(MonotonicTime&) ()
#6 0x00005639b0d24d86 in EventLoop::run() ()
#7 0x00005639b01225da in CMServiceThread::main() ()
#8 0x00005639b0dedfa9 in Thread::callMain(void*) ()
#9 0x00007f8b0d9614a4 in start_thread (arg=0x7f8afa7ff700) at pthread_create.c:456
#10 0x00007f8b0d6a3d0f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97

Even searches might be blocked on this lock:

Thread 81 (Thread 0x7f8afe1ff700 (LWP 10428)):
#0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1 0x00007f8b0d963bb5 in GI_pthread_mutex_lock (mutex=0x7f8b0d0818f8) at ../nptl/pthread_mutex_lock.c:80
#2 0x00005639b0dedcd9 in PthreadMutexImpl::lock() ()
#3 0x00005639b0f948ec in CMSlave::writeBucketsToSearch(unsigned long, Clustering::SiteId const&, Clustering::SummaryAction, Str&) ()
#4 0x00005639b13a0822 in DispatchCommand::dumpClusterSlaveBuckets(SearchResultsInfo&) ()
#5 0x00005639b1429152 in StreamedSearchDataProvider::handleStreamConnectionImpl(HttpCompressingServerTransaction&, SearchResultsInfo*, Str*) ()
#6 0x00005639b142bbb5 in StreamedSearchDataProvider::handleStreamConnection(HttpCompressingServerTransaction&) ()
#7 0x00005639b0c38d4d in MHTTPStreamDataProvider::streamBody() ()
#8 0x00005639b07db115 in ServicesEndpointReplyDataProvider::produceBody() ()
#9 0x00005639b07d28ff in RawRestHttpHandler::getBody(HttpServerTransaction*) ()
#10 0x00005639b0d558fb in HttpThreadedCommunicationHandler::communicate(TcpSyncDataBuffer&) ()
#11 0x00005639b0119e42 in TcpChannelThread::main() ()
#12 0x00005639b0dedfa9 in Thread::callMain(void*) ()
#13 0x00007f8b0d9614a4 in start_thread (arg=0x7f8afe1ff700) at pthread_create.c:456
#14 0x00007f8b0d6a3d0f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97

This explains why the cluster was so unstable when we had issues uploading buckets, and accounts for observations #1 and #3.
This dependency on the CMSlave lock has already been fixed in 8.0.1. Regarding observation #2: since the customer set max_concurrent_downloads/uploads = 200, there were so many concurrent uploads to S3 via the proxy that it locked up and started backing up. At some point it closed the connections on the indexers, upload retries began, and timeouts appeared.
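To back out of the repro state, the concurrency can be dialed back down in server.conf. A sketch; the value 8 is purely illustrative, not a recommendation from this post, so check server.conf.spec for your version's defaults:

```
[cachemanager]
# Lower concurrency so upload retries cannot starve the CM heartbeat;
# 8 is an illustrative value -- verify the default for your version.
max_concurrent_downloads = 8
max_concurrent_uploads = 8
```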
Hello, I created a specific role for some users with a defined maximum time window. Hence, these users are not allowed to search back more than 7 days. As specified in the docs:

srchTimeWin = <integer>
* Maximum time span, in seconds, of a search.
* This time window limit is applied backwards from the latest time specified in a search.

The problem is that when the time window specified in the search is greater than the maximum time window, there is no message warning the user that his search time window has been reduced. Is there a way to display an error or a message when this happens?
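For reference, the setting described above lives in the role stanza in authorize.conf; a sketch for a 7-day window (604800 seconds), with a hypothetical role name:

```
[role_limited_search]
# 7 days * 24 h * 3600 s = 604800 seconds, counted back from the
# latest time of the search.
srchTimeWin = 604800
```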
Hello, We have more than 700 jobs with status "parsing" on the indexer. We were able to delete these jobs only after stopping the splunk service on the Search Head, but the jobs kept coming back after starting the splunk service on the SH again. We need your help. Thanks in advance.
index="redshift" source="mi_input://RASDB-Input-sometext-*"
| lookup country_lookup abbreviation as ID OUTPUT fullname as Country
| eval Country=Country."[".ID."]"
| rename CALL_COST as Currency
| dedup Country
| sort -Currency
| head 10
| table Country,Currency
| where Currency!=0
Hello, I was curious to see if there are any best practices for mapping to CIM data models. More specifically, I'm looking for some guidelines on when (not) to map a certain field to a datamodel. Of course I can map all fields to the default inherited and calculated fields of the data model. But what about fields that are not present in the data model by default? Should you create a calculated field in the data model for every calculation in your search? Or should you leave the data model as default as possible and leave the calculations in your search? In other words, I have a search that calculates a large number of extra fields through evals and lookups. I want to speed up and generalize this search by mapping to a CIM data model. Which fields should I leave in the search (after tstats) and which fields should I map to the data model (so that I can retrieve them with tstats)? Should I add calculated fields to the data model for my extra fields, so that I can retrieve all details through a single tstats command? Alternatively, should I leave the data model as default as possible and calculate the fields in the search (after the tstats command)? Thank you for any help you can offer!
I am getting the same error message, "This saved search cannot perform summary indexing because it has a malformed search", while setting up the summary index. The search, which uses lookups and macros, runs perfectly and returns results, and the permissions on my lookups and macros are set correctly. What else do I need to set/edit for summary indexing?
Hi Team, I am currently working on a UF auto-installation script that has to automatically upgrade the UF package on all Linux boxes running v6.5.3 to v7.3.4. The script should work as follows: check for any existing Splunk UF version on the Linux box; if UF v6.5.3 is already running, stop the UF agent, upgrade to the Splunk UF v7.3.4 package (untar the splunkforwarder.tgz package), and then start the Splunk services. After that it should connect to a DS (updating deploymentclient.conf with the DS host and port 8090). If the Linux box doesn't have any Splunk UF package installed, the script should do a fresh install of the UF v7.3.4 package on that Linux server and then connect to the DS. Wanted to check if you have any reference shell script for the above upgrade/installation. Please note I will use that script for reference purposes only and won't use it directly, as I don't have much background in shell scripting syntax. Request your help on this. Regards, Santosh
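Not an official script, but a minimal sketch of the decision logic described above. All paths, the tarball location, and the DS URI are placeholders for your environment; the full flow is wrapped in a function and not executed here:

```shell
#!/bin/sh
# Sketch of an upgrade-or-install flow for a Splunk UF rollout.
# Placeholders: SPLUNK_HOME, the tarball path, and the DS URI.

# Decide whether a host needs a fresh install or an upgrade,
# based on whether a UF binary already exists under the given home.
uf_action() {
    if [ -x "$1/bin/splunk" ]; then
        echo upgrade
    else
        echo install
    fi
}

# Full flow (define only; invoke as: deploy_uf /opt/splunkforwarder \
#   /tmp/splunkforwarder-7.3.4.tgz ds.example.com:8090).
deploy_uf() {
    home="$1"; tarball="$2"; ds_uri="$3"
    if [ "$(uf_action "$home")" = "upgrade" ]; then
        "$home/bin/splunk" stop          # stop the running 6.5.3 agent
    fi
    tar -xzf "$tarball" -C "$(dirname "$home")"   # lay down 7.3.4
    mkdir -p "$home/etc/system/local"
    {
        echo "[target-broker:deploymentServer]"
        echo "targetUri = $ds_uri"
    } > "$home/etc/system/local/deploymentclient.conf"
    "$home/bin/splunk" start --accept-license --answer-yes --no-prompt
}
```

The upgrade path deliberately untars over the existing install, which is the documented tarball upgrade method; test it on one box before rolling out.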
In our environment Nagios and Splunk are integrated. We configured an alert in Nagios which fetches data from Splunk, but Nagios shows it as "UNKNOWN - Error in Application name "wms"". The alert is configured so that it uses the script check_splunk_savedsearch_value.sh with three arguments:

check_splunk_savedsearch_value.sh -a wms -s "WMOS - EW - Number of Allocation records" -w 1

[root@nagios server]# ./check_splunk_savedsearch_value.sh -a wms -s "WMOS - EW - Number of Allocation records" -w 1
UNKNOWN - no output returned from splunk.ce.corp|"wms:WMOS - EW - Number of Allocation records"=ERROR

When we ran the script in debugging mode, the following command returned no output:

[root@nagios server ]# /usr/bin/curl -s -k -u username:password https://splunk.ce.corp:8089/servicesNS/monitor/wms/search/jobs/export -d 'search=savedsearch %22WMOS%20%2d%20EW%20%2d%20Number%20of%20Allocation%20records%22' -d output_mode=csv|sed 1d
[root@nagios server]#

The saved search "WMOS - EW - Number of Allocation records" does exist on that search head. When we click on the saved search, the web page shows the following error:

Error in 'script': Getinfo probe failed for external search command 'dbquery'
The search job has failed due to an error. You may be able to view the job in the Job Inspector.

The search query is:

| dbquery "wmsewprd" "select count(*) from alloc_invn_dtl where stat_code = 11 and (sysdate - mod_date_time) * 24 * 60 > 30"

When we remove the leading | from the query, it shows events:

dbquery "wmsewprd" "select count(*) from alloc_invn_dtl where stat_code = 11 and (sysdate - mod_date_time) * 24 * 60 > 30"

Is the | in the search query making a difference? If not, what could be the problem?
How can we send data from a Splunk Heavy Forwarder to Elasticsearch directly, without going through Logstash in the middle? From the HF I can't configure outputs.conf with the Elastic username and password to send the data to Elastic. Can anyone help with how a Splunk HF can forward data to Elastic directly without using Logstash?
Hi, I ran a query:

| ldapsearch search="(&(objectClass=user)(!(objectClass=computer)))" attrs="sAMAccountName,distinguishedName,lastLogon,lastLogonTimestamp,division"

I found there are a couple of accounts whose lastLogon is null, but that field has a value when I check those accounts in Active Directory. It is confusing because only some accounts have this issue. Thanks, Linh
When saved as a Dashboard Panel, there is a visualization called Statistics Table (and also Events), but these are not available when saving as a Report. Is there any way to keep the report on the Statistics tab?
Hello Everyone, I am trying to see if I can pass an event field over to a lookup with a wildcard attached (a reverse lookup from the event field). As an example I will use the items below:

table = user_table.csv
lookup = user_table_loookup

user_table.csv data:

email, manager_name
user1@domain_1.com, "Doe, John"

I have an event field username within an index. I then have a lookup table (.csv) that contains the columns email and manager_name within user_table_loookup. Is it possible to attach a wildcard to the username field and send it against the lookup table, so that it matches the username portion of the email and returns the manager_name from the lookup?

index=index_1 username=user1 | lookup user_table_loookup email AS username OUTPUT manager_name

username >> email
user1 >>>> user1@domain_1.com
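One pattern that may fit the reverse lookup above (a sketch, not tested against your data): instead of wildcarding the event value, derive the username portion from the email column of the lookup inside a subsearch and join on it. The field and file names follow the example; `join` on large lookups can be slow, so treat this as a starting point:

```
index=index_1 username=user1
| join type=left username
    [| inputlookup user_table.csv
     | eval username=mvindex(split(email, "@"), 0)
     | fields username email manager_name]
```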
I can see that 80% of the disk is used on my Search Head. How can I decrease it, and what exactly is taking up the space? This SH is not an indexer, so it does not store incoming data.
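For the "what is taking up space" half, a small helper that lists the largest immediate subdirectories of a path may help; on a search head, old search artifacts under $SPLUNK_HOME/var/run/splunk/dispatch are a common culprit. The /opt/splunk path in the usage line is an assumption about your install location:

```shell
#!/bin/sh
# List the ten largest immediate subdirectories of a path, largest first
# (sizes in KB). Example usage: largest_dirs /opt/splunk/var
largest_dirs() {
    du -sk "$1"/*/ 2>/dev/null | sort -rn | head -10
}
```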
I have a drilldown set up to open two different pages from two different clicks on the page, but whatever I click on the page, it only opens the first link, i.e. test1. Anybody know what the issue is here?
i have a drilldown setup to open 2 different pages on 2 different clicks on the page. but what i see is , whatever the click i do on the page , it only opens the first link, ie test1. any body know what the issue is here. <condition match="match('click.name2', &quot;Today&quot;)"> <eval token="test1">mvindex(split('click.value', "Test1"),0)</eval> <link> <![CDATA[/app/test1/test1?form.Name=$test1]]> </link> </condition> <condition match="match('click.name2', &quot;Today&quot;)"> <eval token="test2">mvindex(split('click.value', "Test2"),0)</eval> <link> <![CDATA[/app/test2/test?form.Name=$test2$]]> </link> </condition> <condition></condition> </drilldown>
Our company uses PRTG to monitor all network traffic/bandwidth. We are currently deploying Splunk and would like to add the Splunk network interfaces and monitor them using PRTG via SNMP. The Splunk servers are Linux. Can we install snmpd on these Splunk Linux servers to poll the network bandwidth utilization?
Search peer xxx(servername) has the following message: Received event for unconfigured/disabled/deleted index=\xC2\xA0my_data with source="source::/opt/mylogs/apache/logs/xxx.logs" host="host::servername" sourcetype="sourcetype::xxx.logs". So far received events from 1 missing index(es).

Above is the message banner I see once I restart the forwarder. I created an index named my_data, and I see that another source is already loading into the mentioned index. These are Apache Tomcat logs.
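For what it's worth, the \xC2\xA0 shown before the index name in the message above is a UTF-8 non-breaking space, which suggests the index= value in the input stanza may contain an invisible pasted character, so the peer sees an index literally named "\xA0my_data" rather than "my_data". A sketch of the stanza with a plain-ASCII value; the monitored path is taken from the message, the sourcetype is an assumption:

```
# inputs.conf on the forwarder -- retype "index = my_data" by hand to be
# sure there is no non-breaking space (0xC2 0xA0) before the index name.
[monitor:///opt/mylogs/apache/logs/xxx.logs]
index = my_data
sourcetype = xxx.logs
```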
Our deployment is currently looking to integrate the DB connect app to pull in data from the databases and to use in dashboards and so forth. However, we want the data to only be pulled from the database, with no writing or otherwise modifying the current data. Is there a way to disable the export functionality directly? Or would we just have to set the service account created for the identity to have minimum read-access? https://apps.splunk.com/apps/id/splunk_app_db_connect
System
Microsoft Graph Security API Add-On for Splunk v1.1.0
Splunk Enterprise 7.1.10

Problem
The recommendedActions node for Network_TrafficFromUnrecommendedIP events is not valid JSON.

Details
The data contained within the recommendedActions node isn't JSON, which prevents Splunk from properly applying the JSON indexed_extraction. Thus it shows up as a large text string, and the contained fields are not reportable. Using any JSON formatting tool, the rows in question are:

"azureSubscriptionId": "48381800-1ec9-4a2f-bbda-a1a1a188a2b1",
"recommendedActions": [
  "{\"kind\":\"openBlade\",\"displayValue\":\"Enforce rule\",\"extension\":\"Microsoft_Azure_Security_R3\",\"detailBlade\":\"AdaptiveNetworkControlsResourceBlade\",\"detailBladeInputs\":\"protectedResourceId=/subscriptions/48381800-1ec9-4a2f-bbda-a1a1a188a2b1/resourcegroups/freddata-rg/providers/microsoft.compute/virtualmachines/fredcomputebdc1111111\"}"
],
"title": "Traffic detected from IP addresses recommended for blocking",

After a minor regex to remove the invalid characters, that same section would look like the following:

"azureSubscriptionId": "48381800-1ec9-4a2f-bbda-a1a1a188a2b1",
"recommendedActions": [
  {
    "kind": "openBlade",
    "displayValue": "Enforce rule",
    "extension": "Microsoft_Azure_Security_R3",
    "detailBlade": "AdaptiveNetworkControlsResourceBlade",
    "detailBladeInputs": "protectedResourceId=/subscriptions/48381800-1ec9-4a2f-bbda-a1a1a188a2b1/resourcegroups/freddata-rg/providers/microsoft.compute/virtualmachines/fredcomputebdc947035"
  }
],
"title": "Traffic detected from IP addresses recommended for blocking",
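The "minor regex" cleanup described above could be sketched like this: strip the quotes wrapping each stringified object and unescape the inner quotes. This is a blunt transformation that assumes the embedded objects contain no nested escaped quotes beyond the wrapper, so treat it as a starting point rather than a robust parser:

```shell
#!/bin/sh
# Unwrap stringified JSON objects such as "{\"kind\":\"openBlade\"}"
# into plain objects: {"kind":"openBlade"}. Reads stdin, writes stdout.
fix_recommended_actions() {
    sed -e 's/"{/{/g' -e 's/}"/}/g' -e 's/\\"/"/g'
}
```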