Hi guys, I have the following query and query result, and I am struggling to show it in a graph:

index=infra_apps sourcetype=ca:atsys:edemon:txt
| search Job=*
| rename hostname as host
| eval time_epoch=strftime(_time,"%Y-%m-%d %H:%M:%S")
| fields Job host Autosysjob_time Status _time time_epoch
| lookup datalakenodeslist.csv host OUTPUT cluster
| mvexpand cluster
| table Job Status host cluster _time time_epoch
| search cluster=* AND host=*
| sort + time_epoch
| stats count by _time Job Status host cluster time_epoch
| bin span=2m time_epoch
| makecontinuous _time span=2m
| filldown _time Job Status host cluster count time_epoch

Query result:

_time           Job                                         Status    host    cluster      time_epoch    count
3/3/2020 8:00   1CDH_ING_NBC_ACCT_MSTR_DY_CURR_HG           STARTING  XXXX    edl-prd-m01  43893.33337   1
3/3/2020 8:00   1CDH_ING_NBC_ACCT_OB_PRIM_CK_DY_TMPRL_BMG   STARTING  XXXX    edl-prd-m01  43893.33338   1
3/3/2020 8:00   1CDH_ING_NBC_EVNT_CUST_ID_CHG_HY_HIST_CIS   RUNNING   XXXXX   edl-prd-m01  43893.33372   1
3/3/2020 8:00   1CDH_ING_NBC_EVNT_CUST_PH_CHG_HY_HIST_CIS   RUNNING   XXXX    edl-prd-m01  43893.33372   1
3/3/2020 9:00   1CDH_ING_NBC_EVNT_CUST_PH_CHG_HY_HIST_CIS   RUNNING   XXXX    edl-prd-m01  43893.33372   1

Now I am struggling to show how many jobs are running or starting in each minute. Can you please help?
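A sketch of one way to count jobs per Status per minute (field and lookup names are taken from the question above; untested against the real data):

```spl
index=infra_apps sourcetype=ca:atsys:edemon:txt Job=*
| rename hostname as host
| lookup datalakenodeslist.csv host OUTPUT cluster
| search cluster=* host=*
| timechart span=1m count(eval(Status="RUNNING")) as running count(eval(Status="STARTING")) as starting
```

timechart emits one row per minute with the two counts, which charts directly as a line or column chart. Note that this counts status *events* per minute; if a job should be counted for every minute it remains running, not just the minute its status event arrived, the events would first need to be expanded per minute (for example with concurrency, or a makecontinuous/filldown per Job).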
Hi, is there any way to customize dashboard visualization using Splunk Cloud Gateway? I know that you can select the Apps to be seen on all granted mobile devices, but what if I'd like an App to be seen only on one specific mobile device and not on another? Is there a way to set up this feature? With the old mobile app this was possible because login used the user ID, but now that you have the device ID, how can I do it? Thanks and regards
I am in the process of designing an implementation of Splunk using version 7.3.4 (for organizational reasons) and the first time I opened Splunk Web I received a notice about the Python 3.7 migration. Is this applicable to version 7.3.4 or only for 8.x?
Please help me detect the scenarios below for alerting:

1) A UF stops forwarding the actual source logs (for example, Windows Event Logs) but is still forwarding the _internal logs.
2) A UF stops forwarding both the actual source logs and the _internal logs.
3) How can I find out from the Search Head whether a UF is reporting to the Deployment Server?

Also, please let me know the solution/process for the above when:
a) an HF is configured in the Splunk environment;
b) no HF is configured in the Splunk environment.

I have checked the UFMA app on Splunkbase, but I don't know whether it can fulfill my needs, as I can't make the deployment server a search peer, which is required for the UFMA app to function.
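For scenario 2 (a UF that has gone completely silent), a common starting point is to compare each forwarder's last check-in time in _internal against a threshold. A rough sketch (the 15-minute threshold is an arbitrary placeholder):

```spl
| tstats latest(_time) as last_seen where index=_internal sourcetype=splunkd by host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15
```

For scenario 1, the same idea can be applied to the expected source data (latest event time by host in the source index/sourcetype) and compared against the hosts still alive in _internal; hosts present in _internal but missing recent source events are the candidates.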
I'm querying very large data sets from Splunk several times a day. During days with a lot of data, I'll get an OOM on the search head. Some notes:

- Pulled via the export REST endpoint
- Contains only distributed streaming commands (until the end)
- The last distributed streaming command is "fields", to limit the data coming back to the search head
- The last item is a "table" command to get CSV output
- Already querying in the smallest reasonable timeframe for business requirements

The problem is that "table" is not a streaming command. Is there a command that will take the streaming results, format them as a CSV (header row, data rows), and pass them on without buffering the entire result set on the search head? The expected columns and column order are already defined, so it should just be a filter on the search head to stream the header, then format each record as it's returned from the indexers.
Hi all! I've got a strange problem with data loss, but not all of the data; it's just for a period of time. Here is the configuration of my index:

coldPath = $SPLUNK_DB/myindex/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = /mnt/db/myindex/db
maxTotalDataSizeMB = 35840
thawedPath = $SPLUNK_DB/myindex/thaweddb
frozenTimePeriodInSecs = 15552000
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableOnlineBucketRepair = 1
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
archiver.enableDataArchive = 0

Now I can see data up to 21.02, and the next data starts at 02.03, but the data for the week in between was there until today. What can it be? I've already checked bucketmover; nothing much, 2 buckets deleted, but containing data from long ago.
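One thing worth checking: frozenTimePeriodInSecs = 15552000 is 180 days, but maxTotalDataSizeMB = 35840 (35 GB) also freezes (by default, deletes) the oldest buckets whenever the index exceeds that size, regardless of their age. A quick sketch to inspect the surviving buckets and total size (a sketch using standard dbinspect fields):

```spl
| dbinspect index=myindex
| stats count as buckets sum(sizeOnDiskMB) as totalMB min(startEpoch) as oldest
| eval oldest = strftime(oldest, "%Y-%m-%d")
```

If totalMB sits near 35840, size-based freezing is a likely cause of the missing week.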
Hello, I have the query below, which works fine:

{My search}
| rename user_id as User
| stats max(asctime) as "Last login time (UTC)" by User
| table User "Last login time (UTC)"

Now, from the table result, I want to keep only the rows where "Last login time (UTC)" is 4 months ago or older. Many thanks for your help!
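One possible approach (a sketch; it assumes asctime is, or has been converted to, an epoch value; if it is a string timestamp it would need a strptime first):

```spl
{My search}
| rename user_id as User
| stats max(asctime) as last_login by User
| where last_login <= relative_time(now(), "-4mon@d")
| eval "Last login time (UTC)" = strftime(last_login, "%Y-%m-%d %H:%M:%S")
| table User "Last login time (UTC)"
```

relative_time(now(), "-4mon@d") yields the epoch of midnight four months ago, so the where clause keeps only users whose latest login is at or before that point.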
Hello, thanks for your answer. I use Juniper VPN; how can I know which applications employees are working on when they are working from home? I've tried searching the Juniper logs, but I can't find that information. Thanks
Hi, I have created a few dashboards, but their sharing is private. How can I make these dashboards shared with others, and what role do I need in order to share them? Note: under Action > Edit it does not give me the option to edit permissions. By default my dashboards' sharing is private. Thanks, Ranganath.G
Hello, we would like to run a correlation search every 15 minutes, but only outside of working hours; that is, from 6 pm to 8 am on weekdays and 24 hours a day on weekends. We thought about the cron below:

14-59/15 18-23,0-7 * * *

However, in this case we do not cover the 8 am to 6 pm range on weekends, which is not good. Do you have an idea which cron we should use? Thanks for the help.
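A single cron expression cannot express "evenings on weekdays plus all day on weekends", because the hour and day-of-week fields apply to the whole line. One common workaround, assuming the correlation search can be scheduled twice (for example, a clone of it), is to split it into two schedules:

```
14-59/15 18-23,0-7 * * 1-5   # weekdays: 6 pm through 7:59 am
14-59/15 * * * 0,6           # Saturday and Sunday: all day
```

Monday through Friday mornings (0-7) are covered by the first line, Friday night spilling into Saturday and the whole weekend by the second, and Sunday night into Monday morning by both lines together.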
I have a lookup file that adds additional fields to events. When running the inputlookup command I can see all 4 fields just fine, but when running a search I see only 3 of the 4 values in the table. I've checked the spelling multiple times and removed and re-added the lookup, but I still see only part of the lookup data. Does anyone have an idea? Thank you.
Hello all, we are having an issue with ePO version 5.10: the tables have changed. Whenever we try to execute the given query, it throws the error java.sql.SQLException: Invalid object name 'EPOLeafNode'. Could you please provide us with a query compatible with ePO version 5.10?
Does the Splunk Add-on for Kafka support Splunk 8.0? We are using Splunk 8.2 for testing and want to get data from our IT department's Kafka, but they don't use a Kafka connector, so I installed the Splunk Add-on for Kafka and configured the modular input. However, there is no data in my Splunk. I want to confirm whether the Splunk Add-on for Kafka still supports Splunk 8.0 or higher.
Hi all, I use a lookup file with a mix of IP ranges and single IPs to count login events. My file looks like this:

ip,entity
10.0.1.0/24,A
10.0.2.0/24,B
12.0.0.4,C
12.0.0.8,C

I configured my lookup file with the CIDR option, but the search result only extracts the events whose IP falls inside the IP ranges. I would like to extract all the results. How can I solve this with only one lookup file?
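If the lookup is defined with match_type = CIDR(ip), one common fix (an assumption worth testing; stanza and file names below are illustrative) is to write the single IPs as /32 networks, so every row in the file is a valid CIDR:

```
# transforms.conf
[ip_entity_lookup]
filename = ip_entity.csv
match_type = CIDR(ip)
```

with the single-host CSV rows written as 12.0.0.4/32 and 12.0.0.8/32. A /32 matches exactly one address, so the exact-IP behaviour is preserved while the range rows keep working, all in one lookup file.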
Hello, I have the following JSON data coming in:

{
  "event_timestamp" : "2020-03-03 T 12:56:54 +0200",
  "file_timestamp" : "",
  "username" : "xxxx",
  "session_id" : "F23AA957F1A494C12F2B21B5A7533FF3",
  "request_id" : "74b9cf97-934c-41cb-b81e-1152f51e28b7",
  "register_id" : [ ],
  "system_id" : "ASDFG",
  "environment" : "LINUX",
  "service_id" : "12355",
  "parameters" : [
    { "field" : "xxx", "value" : "xx-123", "search" : false, "securityProhibition" : false },
    { "field" : "yyy", "value" : "yy-564", "search" : false, "securityProhibition" : false },
    { "field" : "zzz", "value" : "1234433222", "search" : false, "securityProhibition" : false },
    { "field" : "vvv", "value" : "www.google.com", "search" : false, "securityProhibition" : false },
    { "field" : "qqq", "value" : "qwert", "search" : false, "securityProhibition" : false }
  ],
  "info" : null,
  "error" : [ { "code" : "202", "message" : "General Error" } ],
  "schema_version" : "1.0"
}

I have a dashboard where users can make searches based on given values. For example, a user can select yyy from a dropdown, give the value "yy-564", and Splunk tries to find all events where that can be found. Here I populate the search like this:

index=myindex (parameters{}.field="yyy" AND parameters{}.value="yy-564")

That works, but it also finds events where the value "yy-564" appears under another parameter field, such as zzz. Any ideas on how to make this work correctly, so that it only matches the parameters entry whose field is "yyy" and whose corresponding value is "yy-564"? Thanks
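One common pattern for keeping field/value pairs together is to expand each element of the parameters array into its own result before filtering, for example with spath and mvexpand (a sketch; the literal values would come from the dashboard tokens):

```spl
index=myindex
| spath path=parameters{} output=param
| mvexpand param
| spath input=param
| search field="yyy" value="yy-564"
```

Because each result now holds exactly one parameters object, field and value can no longer match across different array elements, which is what causes the false hits with the flattened parameters{}.field / parameters{}.value search.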
(That is: in a Splunk environment where data is stored in multiple indexes.) I'm almost certain that the answer is "Yes". I've seen what could be interpreted as "Yes" answers inline in other questions. For example, this statement in 2016 by @somesoni2 : it's always advisable, from performance perspective, that you specify the index you need to search, to cut down the number of indexes/buckets to be searches. However, I wanted a specific question for this, preferably with affirmative answers from Splunk or prominent Splunk users, that I could refer to (and point other people to). I'm asking this question because I have learned that, as a fix for failing the AppInspect check check_indexes_conf_does_not_exist , a fellow app developer both (a) removed indexes.conf from their app and (this is specifically why I'm asking this question) (b) removed all index=... constraints from the searches in the app. So the app now constrains its searches by sourcetype , but not index . I suspect this means that Splunk will search all indexes that the user is allowed to search. Not ideal, from a performance perspective. Unless—this is just a thought bubble; I have no evidence for this in reality—Splunk somehow keeps track of which sourcetypes are stored in which indexes: when a search specifies a sourcetype, Splunk implicitly constrains the search to only those indexes that it knows contains those sourcetypes. I doubt that this is true, but I wanted to at least raise this as a possibility; preferably, for someone to explicitly state as false in their answer.
I have a question. I installed the Fortinet app and add-on on a SH cluster using the deployer, but the app was not visible. When I try to make it visible, it only allows me to do so on the captain, and I cannot change it on the other cluster members. Can anyone tell me the correct way to do this?
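For a search head cluster, app visibility is normally controlled from the deployer rather than per member: set is_visible in the app's app.conf inside the deployer's shcluster staging directory and push the bundle again. A sketch (paths as on a default install; untested in this environment):

```
# on the deployer: $SPLUNK_HOME/etc/shcluster/apps/<app_name>/local/app.conf
[ui]
is_visible = true
```

followed by re-applying the bundle with splunk apply shcluster-bundle, so all members receive the same visibility setting instead of it being changed on the captain only.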
I am in time zone X, which can be any time zone. The Splunk indexers are in the EST time zone. I want to search all events from midnight EST up to the current time. The question is how to specify this time range using time modifiers. If I use latest=now and earliest=@d in my search query, @d is midnight in my own time zone, not in EST. How do I create a time range for the search query that says "give me the events from midnight EST to the current time in EST"? Note: this query should work regardless of the time zone of the user running it. Basically, we want to search events in EST while being in a different time zone.
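Time modifiers like @d cannot express this directly, but the cut-off can be computed arithmetically in the search, since epoch time is time-zone independent. A sketch assuming a fixed UTC-5 offset (EST; it does not handle the switch to EDT, where the offset would be 14400 seconds instead of 18000):

```spl
index=your_index earliest=-24h latest=now
| eval est_midnight = now() - ((now() - 18000) % 86400)
| where _time >= est_midnight
```

(now() - 18000) % 86400 is the number of seconds elapsed since the most recent midnight on an EST clock, so subtracting it from now() yields the epoch of that midnight. earliest=-24h merely bounds the scan; the where clause keeps only events since midnight EST, whatever time zone the user runs the search in.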
Excerpt from an AppInspect report:

[ Failure Summary ]
Failures will block the Cloud Vetting. They must be fixed.
check_all_lookups_are_used
    Lookup file my_trans.csv is not referenced in transforms.conf. File: default/transforms.conf

The report is correct: my_trans.csv (not its real name) is not referenced in transforms.conf. However, my_trans.csv is referenced by a macro in the app. From the app's macros.conf:

[myapp_exclude_my_trans]
definition = NOT [|inputlookup my_trans.csv]

From the description of this check in the AppInspect docs:

Check that all files in the /lookups directory are referenced in transforms.conf.

Why must files in the /lookups directory be referenced in transforms.conf? Do I really need to add:

[mylookuptable]
filename = my_trans.csv

just to satisfy AppInspect?
I have two questions.

First question: below is the query that generates the stats I want to push into a summary index:

index="myIndex" host="myHost" source="/var/logs/events.log" sourcetype="ss:vv:events" (MTHD="POST" OR MTHD="GET")
| rex field=U "(?P[^\/]+)(\/([a-z0-9]{32})|$)"
| search (ApiName=abc OR ApiName=xyz)
| dedup CR,RE
| stats count as TotalReq by ApiName, Status
| xyseries ApiName Status, TotalReq
| addtotals labelfield=ApiName col=t label="ColTotals" fieldname="RowTotals"

It gives me the perfect result:

ApiName   | 200 | 400 | 404 | 500 | RowTotals
abc       | 12  | 2   | 4   | 1   | 19
xyz       | 10  | 3   | 2   | 2   | 17
ColTotals | 22  | 5   | 6   | 3   | 36

But when I change stats to sistats to push into the summary index, it does not produce any result. Please help me with the query.

Second question: I already have a summary index, and one stats report with a different query is already pushed to it every day, which I have annotated using the Add Fields option in the Edit Summary Index window as report = firstReport. Can I push another report (the one above) into the same summary index with a different annotation, report = secondReport? Will it work, or do I have to create another summary index for this report as well? Please help.
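For the first question, one common pattern is to leave the presentation commands out of the populating search entirely: an si- command is meant to be the final transforming step of the scheduled search, with xyseries and addtotals applied at report time. A sketch (untested; note the named capture group in the rex appears to have been stripped from the post, so <ApiName> below is an assumption based on the field used afterwards):

```spl
index="myIndex" host="myHost" source="/var/logs/events.log" sourcetype="ss:vv:events" (MTHD="POST" OR MTHD="GET")
| rex field=U "(?P<ApiName>[^\/]+)(\/([a-z0-9]{32})|$)"
| search (ApiName=abc OR ApiName=xyz)
| dedup CR,RE
| sistats count by ApiName, Status
```

and then, in the reporting search against the summary index:

```spl
index=summary report=secondReport
| stats count as TotalReq by ApiName, Status
| xyseries ApiName Status TotalReq
| addtotals labelfield=ApiName col=t label="ColTotals" fieldname="RowTotals"
```

For the second question, annotation fields like report=firstReport exist precisely so that several reports can share one summary index and be separated at search time, so a second annotation should work without a new index.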