All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, everybody! I want to ask something that has been asked several times before, but there is still no clear solution. My initial query gives me a set of events, each of which has child_id and parent_id fields. Sample data looks like this:

child_id | parent_id
********************
null     | A1
null     | B1
A1       | A2
B1       | B2
A2       | C1
B2       | C1
C1       | C2
C2       | D1
C2       | E1

So the elements at the bottom of the hierarchy have child_id = null. The depth of the parent-child relationships is not known in advance. I wonder how I can reassemble these events into the hierarchy, so that if I specify an event, my search returns only that event and all of its parent events. For example:
If I search child_id=B2, I need to get two events as results: child_id=B2 (root) and child_id=B1 (1 child).
If I search child_id=C1, I need to get five events as results: child_id=C1 (root) and child_id=A2, child_id=B2, child_id=A1, child_id=B1 (4 children), etc.
In other words, I need to get chains from the initial data:

child_id | chain
****************
A1 | A1
A2 | A2 -> A1
B1 | B1
B2 | B2 -> B1
C1 | C1 -> A2 -> A1
C1 | C1 -> B2 -> B1
C2 | C2 -> C1 -> A2 -> A1
C2 | C2 -> C1 -> B2 -> B1
D1 | D1 -> C2 -> C1 -> A2 -> A1
D1 | D1 -> C2 -> C1 -> B2 -> B1
E1 | E1 -> C2 -> C1 -> A2 -> A1
E1 | E1 -> C2 -> C1 -> B2 -> B1

I tried to achieve this with transaction and map, but no luck so far. It looks like I need some kind of recursion. Is it perhaps possible to implement recursion with a search macro that points to itself?
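SPL has no true recursion (a macro cannot expand itself), but if a maximum depth can be assumed, one workaround is to flatten the hierarchy by repeating a self-lookup once per level. A hedged sketch, assuming the edges have been exported to a lookup file edges.csv with columns child_id and parent_id (the file itself is hypothetical; column names come from the sample data):

```
| inputlookup edges.csv
| eval chain=parent_id, hop0=parent_id
| lookup edges.csv parent_id AS hop0 OUTPUTNEW child_id AS hop1
| mvexpand hop1
| eval chain=if(isnotnull(hop1), chain." -> ".hop1, chain)
| lookup edges.csv parent_id AS hop1 OUTPUTNEW child_id AS hop2
| mvexpand hop2
| eval chain=if(isnotnull(hop2), chain." -> ".hop2, chain)
```

Each lookup/mvexpand/eval triple walks one more level (mvexpand handles nodes with several children, such as C1); repeat the triple up to the deepest expected hierarchy. This only works to a fixed depth — for a truly unbounded hierarchy, a custom search command is the usual escape hatch.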
I want to append a new field with a static value to the data at index time. How do I set this up with props.conf/transforms.conf? No field extraction is required.
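The standard way to attach a static indexed field is a transform that matches every event and writes to _meta; a sketch, where my_sourcetype, my_field, and static_value are all placeholders:

```
# transforms.conf
[add_static_field]
REGEX = .
FORMAT = my_field::static_value
WRITE_META = true

# props.conf
[my_sourcetype]
TRANSFORMS-addstatic = add_static_field

# fields.conf (on the search head), so the field is treated as indexed
[my_field]
INDEXED = true
```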
Hello, I have the following issue: I am located, along with most of my users, in the CET zone. For two years, until last week, everything looked fine with respect to the users' timezone. I know this option does not exist, but the system, although itself on UTC, behaved as if it recognized the users' location and presented CET in searches. About a week ago (I have no clue why), the system decided to present GMT to the users in searches. When I go to the user properties, it shows "Default System Timezone" there, so one could say it behaves as expected. My questions would be:
- Is there any possibility that Splunk detects the user's time zone based on, e.g., the browser settings?
- If not, how would I mass-change it to CET for my users? Clicking through the user settings one by one is not much fun.
- Theoretically I could make a change for all users in etc/system/local/user-prefs.conf:

[default]
# Only canonical timezone names such as America/Los_Angeles are allowed
tz = America/Los_Angeles

The question is: what would the correct canonical timezone name for CET be? tz = Europe/Berlin?
Kind Regards, Kamil
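For reference, Europe/Berlin is indeed a canonical IANA name inside the CET/CEST zone (Europe/Paris or Europe/Warsaw would be equivalent choices). A minimal sketch of the mass change; note that the default stanza in user-prefs.conf is usually [general_default] rather than [default]:

```
# etc/system/local/user-prefs.conf
[general_default]
tz = Europe/Berlin
```

This sets the default only for users who have no per-user preference saved; users may need to log back in before the change shows up.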
I'm building a dashboard in which I want the user to be able to select only a specific date. I want a panel with a calendar look, so the user can pick a single, distinct date. The "Time" input panel is not what I need, since there is no way to restrict it to a single date rather than a range of dates. Any advice?
Dear All, hope you are all doing fine. I am currently working on a dashboard and would need your help to check whether it is possible to split the world map into EMEA, APAC, and AMER regions and then map data according to region. For example, data related to the country France should be mapped to the EMEA region, data from Antarctica should be mapped to the APAC region, and so on. Looking forward to your inputs. Thanks!! Regards, Abhi
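A common approach is to translate country to region before charting, with either a lookup file or an eval; a sketch, assuming a field named Country (the country lists shown are illustrative, not a complete mapping):

```
index=mydata
| eval Region=case(match(Country, "France|Germany|UK"), "EMEA",
                   match(Country, "Antarctica|Australia|Japan"), "APAC",
                   match(Country, "USA|Canada|Brazil"), "AMER",
                   true(), "Other")
| stats count BY Region
```

For the full country list, a two-column lookup file (Country, Region) applied with `| lookup region_map.csv Country OUTPUT Region` scales better than a long case().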
I have a field named file_name whose values look like this:
file_name=Operating System-Linux-Server-Support-GENVE0001VA.gmail.com.au-GEN-Adm02
From this field I want to display only the "GENVE0001VA.gmail.com.au" part; I don't want the remaining value. Please let me know how to write the regex in a Splunk search query.
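Assuming the wanted part always follows "Support-" and never contains a hyphen itself (both assumptions are based on the single sample value shown), rex can pull it into a new field:

```
... | rex field=file_name "Support-(?<server_fqdn>[^-]+)-"
    | table server_fqdn
```

server_fqdn is a hypothetical field name; the character class [^-]+ stops at the next hyphen, which captures GENVE0001VA.gmail.com.au from the sample value.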
I have a script which is executed from inputs.conf, but I need the script file name in a new field instead of the source tag, since I have a default source name configured. I want to add the script file (source script) name to the indexed data as a new field:

[script:///$SPLUNK_HOME/etc/apps/KIO/bin/Stats.py]
interval = * * * * *
source = siebel
sourcetype = inflowstats
disabled = False
index = index1
host = server1
Script = ScriptName
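inputs.conf will not accept an arbitrary key like Script =, but it does support _meta for stamping an indexed field onto every event from an input; a sketch, with script_name as a placeholder field name:

```
# inputs.conf
[script:///$SPLUNK_HOME/etc/apps/KIO/bin/Stats.py]
interval = * * * * *
source = siebel
sourcetype = inflowstats
index = index1
host = server1
_meta = script_name::Stats.py

# fields.conf (on the search head), so the field behaves as indexed
[script_name]
INDEXED = true
```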
Hi, I'm using Splunk 8.0.3 with DB Connect 3.3.0. After installing DB Connect, I'm trying to configure it; however, the "welcome" screen keeps "loading" and stays blank: http://localhost:8000/de-DE/app/splunk_app_db_connect/ftr#/welcome Any suggestions? I couldn't find anything using Google / Splunk Answers. Thanks!
Hi, I'm trying to deploy the Splunk connector on my Kubernetes cluster. Here is my config file:

global:
  splunk:
    hec:
      token: 16e1174f-0989-4410-b801-225ff63ef7b8
      host: srvinf.coolcorp.priv
      port: 8088
      indexName: idx_k8s_logs
      insecureSSL: true
  kubernetes:
    # connection to kubernetes is insecure
    insecureSSL: true
splunk-kubernetes-metrics:
  splunk:
    hec:
      indexName: idx_k8s_metric

I deploy the solution with the command "helm install splunk-collector -f splunk-conf.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/1.4.0/splunk-connect-for-kubernetes-1.4.0.tgz". Unfortunately, some pods crash in a loop:

root@k8s-worker-001:/home/bvivi57# docker logs ce9940e8f4d2
2020-04-21 07:47:21 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2020-04-21 07:47:21 +0000 [info]: gem 'fluent-plugin-jq' version '0.5.1'
2020-04-21 07:47:21 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.4.2'
2020-04-21 07:47:21 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.7.0'
2020-04-21 07:47:21 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-04-21 07:47:21 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.1'
2020-04-21 07:47:21 +0000 [info]: gem 'fluentd' version '1.9.1'
2020-04-21 07:47:22 +0000 [error]: config error in:
  @type splunk_hec
  data_type metric
  metric_name_key "metric_name"
  metric_value_key "value"
  protocol
  hec_host "http://srvinf.coolcorp.priv"
  hec_port 8088
  hec_token "5285bf89-5c6c-4ca4-82de-71ce95d227fc"
  host "k8s-worker-001"
  index "idx_k8s_metric"
  source "${tag}"
  insecure_ssl true
  @type memory
  chunk_limit_records 10000
  chunk_limit_size 100m
  flush_interval 5s
  flush_thread_count 1
  overflow_action block
  retry_max_times 3
  total_limit_size 400m
2020-04-21 07:47:22 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="valid options are http,https but got "

I can't understand my mistake. Can you help me?
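The fluentd error ("valid options are http,https but got ") indicates the HEC protocol setting ended up empty; notice in the dumped config that protocol has no value while hec_host carries the scheme. A hedged sketch of the relevant values fragment, assuming HEC is served over plain HTTP (the host value should not include a scheme):

```
global:
  splunk:
    hec:
      protocol: http
      host: srvinf.coolcorp.priv
      port: 8088
```

Whether global.splunk.hec.protocol is the exact key for this chart version is an assumption; the values.yaml shipped with the 1.4.0 release is the authoritative reference.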
Hello, everyone. I restored the ITSI Content Pack into a fresh ITSI installation, but after the restore it has no entities. There are many services in it, but each one has only one KPI. I had expected to get many entities and services, each service with many defined KPIs, but I was wrong. So the question is: is there another ITSI Content Pack with full entities and services, instead of the one I'm using right now? I need it because I will use it as a base for implementing ITSI. Thank you.
Hello, I have a table:

time    available
------  -----------
09:00   OK
09:05   time_out
09:10   time_out
09:15   OK
09:20   OK
09:25   OK
09:30   timeout
09:35   OK
09:40   OK
09:45   time_out
09:50   time_out
09:55   time_out
10:00   OK

What I need is to select only the records where at least 3 continuous records have available == time_out. So in this case the correct output would be:
09:45
09:50
09:55
Any idea how to perform this kind of search? Thank you very much.
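Runs of consecutive values can be counted with streamstats; a sketch, assuming the events arrive in time order and the field is literally named available (group_id is a helper field invented here):

```
...
| streamstats count(eval(available!="time_out")) AS group_id
| eventstats count(eval(available=="time_out")) AS run_len BY group_id
| where available="time_out" AND run_len>=3
| table time available
```

group_id only increments on non-time_out rows, so it stays constant across each unbroken run of time_out rows; run_len is then the length of that run, and the where clause keeps only runs of 3 or more (for the sample data, exactly 09:45, 09:50, and 09:55).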
I am new to Splunk and I need to create a dashboard with a chart which shows the daily average of one field. Suppose yesterday's average (execution time) for all events was 2.9 seconds and the day before it was 2.8 seconds; likewise, I need to show a chart for all previous days, maybe the last month or 10 days, with the average per day. Below is my query for the chart:

<title>UI Source Type Trends</title>
<chart>
  <search>
    <query>index=abc sourcetype="sfdc:logfile" $userId$ $recordId$ | search EVENT_TYPE="ApexExecution" | table EXEC_TIME</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <option name="charting.chart">line</option>
  <option name="charting.drilldown">none</option>
  <option name="height">119</option>
  <option name="refresh.display">progressbar</option>
</chart>
</panel>

Please help.
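To get one point per day, the final `table` can be replaced with a `timechart`; a sketch using the index, sourcetype, and field names from the query in the post:

```
index=abc sourcetype="sfdc:logfile" EVENT_TYPE="ApexExecution"
| timechart span=1d avg(EXEC_TIME) AS avg_exec_time
```

With the dashboard time picker set to the last 30 (or 10) days, the line chart then shows one averaged value per day.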
Hi, I'm using snmp_ta, the newest version 1.5, with an eval key. We have set up the SNMP configuration in Splunk successfully. The stanza looks as follows:

[snmp://OtcsMaThreadPerformance]
activation_key = 91C3A8052D3B6BB033AC165FDF24462E
destination = host
do_bulk_get = 0
do_get_subtree = 1
index = otcs
ipv6 = 0
mib_names = MONITORING_AGENT_MIB
object_names = .1.3.6.1.4.1.14876.4.2.1.1
port = 162
response_handler = MonitoringAgentResponseHandlerThread
snmp_mode = attributes
snmp_version = 2C
snmpinterval = 60
sourcetype = OtcsMaThreadPerformance
split_bulk_output = 1
trap_rdns = 0
v3_authProtocol = usmHMACMD5AuthProtocol
v3_privProtocol = usmDESPrivProtocol
host = host
disabled = 0

I can see that Splunk is receiving some specific data (such as host name information), but all other values are 0. See screenshot: https://ibb.co/Zg6BzrJ In the past I managed to configure SNMP for this solution, so I have working examples.
How can we export 'Data inputs » Intelligence Downloads' & 'Content Management' pages as CSV?
About our architecture: all of our UFs send data to one UF, which we call the Intermediate Universal Forwarder (IUF). The IUF receives data and forwards it to Splunk Cloud; it is our gateway to Splunk Cloud.
Goal: I am building a Disaster Recovery component for this IUF. When no DR scenario is in place, the IUF needs to send only _internal logs to Splunk Cloud, but during a DR scenario it needs to send all logs. This way I will be able to track the UF status on all DR nodes without consuming license from them when there is no DR scenario in place. If I can figure out how to send only _internal logs to Splunk, I can bundle this configuration into a DR-Control app on the IUF.
How do I configure a UF to send only _internal logs (both its own and those forwarded to it by other UFs) to its default outputs.conf destination (which in our case is Splunk Cloud) and discard all other data to the null queue?
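On a universal forwarder, index-based filtering is possible in outputs.conf via the forwardedindex settings; a hedged sketch of what a DR-Control app might ship (the numbered filters override the system defaults, which normally whitelist everything, so clearing them explicitly may be necessary):

```
# outputs.conf in the DR-Control app
[tcpout]
forwardedindex.filter.disable = false
# keep only _internal; blank out the default numbered filters
forwardedindex.0.whitelist = _internal
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
```

Filter merging order with the system defaults is version-dependent, so this should be verified with `splunk btool outputs list --debug` before relying on it for license control.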
Hello Team, I am new to the Splunk environment and have slowly learned report and dashboard creation, along with graphs. I now have a new requirement: I need to capture Splunk graph results in a PowerPoint file in a common location. Since we have multiple reports (30 reports), every 2 days we need to capture the graph data and store it in PowerPoint, and this process takes around 1 hour. I heard that we can capture these reports or graphs automatically using PowerShell commands and keep the screenshots in a common location. So my question is: can we capture Splunk graphs using PowerShell commands, and do we have a command for this process? Please suggest how I can achieve this task.
Hi, I have a search that returns hundreds of results. Each result contains a field called jobName and another field called server. I need to build a two-column table from a fixed set of job names. However, if one of the job names in my defined list is not found in the search results, I need to return 'Not found'. The table should look like:

JobName | server
JobA    | Server1
JobB    | Server2
JobE    | Not found
JobX    | Server6

Can anyone please help with how to generate the output above? Thanks
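One common pattern is to start from the fixed list and left-join the search results onto it; a sketch, assuming the list lives in a single-column lookup file joblist.csv (the file name and index are hypothetical):

```
| inputlookup joblist.csv
| join type=left JobName
    [ search index=myindex
      | rename jobName AS JobName
      | stats values(server) AS server BY JobName ]
| fillnull value="Not found" server
```

Rows from joblist.csv with no matching search result keep a null server field, which fillnull then replaces with "Not found".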
Hello, I'm building a report (Splunk 7.3) in which I need to append some counts as the first row of the table I'm generating. This is my query:

myquery
| dedup ID
| eval DLP_installed=if(match(myDLPTAG, "yes", "no"), DLP_rules_installed=if(match(myDLPrules, "yes", "no")
| stats values(DLP_installed) AS dlpinstalled values(DLP_rules_installed) AS "rules_installed" values(Tags) AS Tags BY ID, FQDN

I need to append a row containing count(eval(DLP_installed=="yes")) as the first row. Any ideas? Thanks
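One way to add such a summary row is appendpipe, which runs a subpipeline over the current result set and appends its output; a sketch placed after the stats from the query (column names taken from the post, the "TOTAL installed" label is illustrative; appendpipe adds the row at the end, and reverse is one blunt way to move it to the top, at the cost of reversing the other rows too):

```
... | appendpipe
        [ stats count(eval(dlpinstalled=="yes")) AS dlpinstalled
          | eval ID="TOTAL installed" ]
    | reverse
```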
Hi, in the Splunk configs, does true/false mean the same as 1/0? For example, in transforms.conf we have MV_ADD = [true|false]; can we use MV_ADD = [1|0] in the CLI? Does it mean the same? Thanks
We have a couple of indexers in a distributed environment. What would happen if I brought both indexers down? Would there be data loss, i.e. would I miss all the data coming in from the UFs? If so, what is the best way to make sure I don't lose any new data?
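While indexers are down, forwarders keep retrying and buffer events in their output queues (and continue reading monitored files from their last checkpoint), so short outages usually cause no loss; long outages can overflow the queues, especially for streaming inputs like UDP. Indexer acknowledgment additionally protects in-flight data; a sketch with illustrative host names:

```
# outputs.conf on the forwarders
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```

With useACK = true, the forwarder re-sends any data block the indexer never acknowledged, and load balancing across both indexers makes a single-indexer outage survivable on its own.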