All Topics



I am looking for an integration between Splunk and the Ivanti ITSM tool. Is there any out-of-the-box (OOTB) integration or API available to integrate them?
Hi fellow Splunkers, maybe my question was not clear enough. It would be a sufficient answer if someone could provide a few links to read about splunktcp tokens. At the moment I only have: https://docs.splunk.com/Documentation/Forwarder/7.3.5/Forwarder/Controlforwarderaccess

Thanks and best regards, vess

Hi all, I need authentication enabled between my forwarders and indexers on the listening TCP port 9997. This is important for us because we want to open this port on a DMZ intermediate forwarder (universal forwarder). The DMZ intermediate forwarder sends the data through a firewall to my indexer on the intranet. I searched the Splunk docs and found only one page: https://docs.splunk.com/Documentation/Forwarder/7.3.5/Forwarder/Controlforwarderaccess (that doc has a typo under "Enable a token": in the command, change 'tok1' to 'my_token').

I have a few questions:

1. How can I see all existing tokens on my indexer/forwarder? From the documentation I can only create, enable, disable, and delete them.
2. Can I manage tokens on my clients (forwarders) via a deployment server?
3. On my indexer, after I create a token (which is enabled immediately), all other incoming splunktcp traffic is blocked. Can I activate tokens for a specific input only? For example a separate input on [splunktcp://9998], while traffic on TCP 9997 keeps working without tokens?

This is what I get after creating a token (which is active immediately, by the way):

04-17-2020 14:59:52.871 +0200 ERROR TcpInputProc - Error encountered for connection from src=10.x.x.x:51116. Local side shutting down
host = testforwarder  source = /opt/splunk/var/log/splunk/splunkd.log  sourcetype = splunkd

Thanks all, best regards Michele
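For reference, the CLI commands in the linked doc map onto plain configuration stanzas, which is relevant to the deployment-server question. The following is a sketch from memory, not verified against 7.3.5: the `splunktcptoken` stanza name and the `token` settings should be checked against the inputs.conf and outputs.conf spec files for your version.

```ini
# inputs.conf on the indexer (receiving side) -- assumed stanza name
[splunktcptoken://my_token]
token = <the shared secret>

# outputs.conf on the DMZ intermediate forwarder (sending side)
[tcpout:intranet_indexers]
server = indexer.example.com:9997
token = <the same shared secret>
```

Because these are ordinary .conf files, a deployment server can push them to forwarders like any other app. Listing existing tokens should be possible via the REST endpoint backing these stanzas (something like /services/data/inputs/splunktcptoken), but that endpoint path is an assumption to verify.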
Hi friends, I have a drop-down filter in a dashboard. If that filter is not populated with any value, then the filter should not be visible while the dashboard loads, and the tokens used for that filter in the panel queries should be inactive. Vice versa: once there is data for the filter, it should be shown in the dashboard and the tokens used for it in the panel queries should be active. Please help me achieve this. Thanks in advance.
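One common Simple XML pattern for this (a sketch; the index and field names are placeholders) is to drive the input's `depends` attribute from a hidden search's `done` handler, so the dropdown only appears when its populating search returns results. Panels can use the same `depends="$show_filter$"` so their queries stay inactive too.

```xml
<!-- hidden search that decides whether the filter is shown -->
<search>
  <query>index=my_index | stats count by my_field</query>
  <done>
    <condition match="'job.resultCount' &gt; 0">
      <set token="show_filter">true</set>
    </condition>
    <condition>
      <unset token="show_filter"></unset>
    </condition>
  </done>
</search>

<input type="dropdown" token="my_filter" depends="$show_filter$">
  <label>My filter</label>
  <fieldForLabel>my_field</fieldForLabel>
  <fieldForValue>my_field</fieldForValue>
  <search>
    <query>index=my_index | stats count by my_field</query>
  </search>
</input>
```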
Hello, I'm using Enterprise Security glass tables to show IT security indicators. Is it possible to export ES glass tables to PDF format? Thanks
Hi team, we have a Python script that calls the knowledge-object REST APIs and processes the results. We are going to run this script on an ad-hoc basis to list and modify permissions. Do we have to use a custom search command for that? If yes, please let us know the approach.
Hi, how do I sum values across multiple columns, grouped by another column? For instance, my data looks like this: [the screenshot of the source table did not come through]. How do I get two columns with just Name and Quantity that combine the results in the table? Essentially:

Name   Quantity
Car    3
Plane  2

and so on. Thank you!
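Since the source table didn't survive, here is a sketch for the two likely shapes of the data (field names are assumptions). If there is one row per item with a numeric column, a plain stats does it; if the quantities are spread across one column per item, untable can normalize them first:

```spl
(one row per item)
... | stats sum(Quantity) as Quantity by Name

(one column per item, e.g. Car, Plane, ..., one row per _time)
... | untable _time Name Quantity
    | stats sum(Quantity) as Quantity by Name
```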
Hi, I have exactly the same issue as described here: https://answers.splunk.com/answers/513703/json-breaking-single-string-into-multiple-events.html

So I added the following to my props.conf file:

[_json_source]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
KV_MODE = json
TRUNCATE = 0
SEDCMD-fixfooters = s/]}//g
LINE_BREAKER = ([\r\n,]*(?:{[^[{]+[)?){"teamInCharge
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true

The events are breaking correctly, but the first event is not parsed properly: it appears only as raw text and not as JSON. The difference I see between the first and second event is at the end of the line. The first event ends like this, as raw text only: {"teamInCharge":[], bla bla,"serialNumber":""}] The second event ends like this, with the syntax-highlighted JSON: {"teamInCharge":[], bla bla,"serialNumber":""} Please help.
Hi guys, I am unable to run the tstats command against a sub-dataset of a data model. Whenever I try, it throws this error:

Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel

I am not even using summariesonly in my query, which would require the data model to be accelerated (it is accelerated, though!). This query gives me the right answer:

| from datamodel:Intrusion_Detection.Network_IDS_Attacks | stats count

However, when I use tstats as below, it all goes haywire:

| tstats count from datamodel=Intrusion_Detection.Network_IDS_Attacks

Could someone point out what I'm doing wrong?
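The error is consistent with tstats accepting only the data model name (whose root object is accelerated) after `datamodel=`; a child dataset is usually selected with a `nodename` filter instead. A sketch of that form, where the exact dotted dataset path is an assumption to verify in the data model editor:

```spl
| tstats count from datamodel=Intrusion_Detection
    where nodename=IDS_Attacks.Network_IDS_Attacks
```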
Hi, I have a query whose result set looks like this:

logger: com.optum.bh.benefit.plan.api.BhBenefitPlansResource
message: bhben-plan-api:bHPlanView(), env=prod packageId = 1438939 timeUsed(ms) = 19
properties: { [+] }
severity: DEBUG
thread: http-nio-8080-exec-5

host = hec-splunk.optum.com
message = bhben-plan-api:bHPlanView(), env=prod packageId = 1438939 timeUsed(ms) = 19
source = bhwebservice.log
sourcetype = cba_shared_components:scwebservice:error_log

I need to extract the timeUsed(ms) field so that I can build a table of the elapsed time for the requests.
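A minimal extraction sketch; the regex assumes timeUsed(ms) always appears as in the sample, and the output field name is a choice (parentheses are awkward in SPL field names):

```spl
... | rex field=message "timeUsed\(ms\)\s*=\s*(?<timeUsed_ms>\d+)"
    | table _time, timeUsed_ms
```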
I just want to create a CSV file automatically every day. For example, today 20200417.csv would be created, tomorrow 20200418.csv, and so on. Is that possible?
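outputcsv itself doesn't expand date variables in the file name, but one commonly described workaround (a sketch; verify on your version, and the index name and time window are assumptions) is to build the name with strftime and pass it to outputcsv via map:

```spl
| makeresults
| eval fname=strftime(now(), "%Y%m%d")
| map search="search index=my_index earliest=-1d@d latest=@d | outputcsv $fname$"
```

Scheduled daily, this would write files like 20200417.csv under $SPLUNK_HOME/var/run/splunk/csv/ on the search head.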
Apologies in advance if I'm mixing some terminology; I'm relatively new to Splunk.

I'm building a Splunk app to monitor our product, Mattermost. We expose Prometheus-style metrics and I'm using the Prometheus data input type by @luke.monahan@rivium.com.au to get the metrics in (thanks Luke!). We have a metric mattermost_db_master_connections_total that is displayed in a single-value chart at the top of a dashboard, as well as among some time-series charts below. The time-series chart seems to match what I see in our equivalent Grafana dashboard, but the single-value stat bounces between 0 and the value I would expect, depending on when I refresh the dashboard. Is there something I should be doing in my query to smooth out those drops to 0 on the single-value panel? What is happening here? Missing values I don't see on the time series?

Single value query:

| mstats max(_value) prestats=true WHERE metric_name="mattermost_db_master_connections_total" AND sourcetype=prometheus:metric span=15s
| timechart max(_value) span=15s

Time series query:

| mstats max(_value) prestats=true WHERE metric_name="mattermost_db_master_connections_total" AND sourcetype=prometheus:metric span=15s BY host
| timechart max(_value) span=15s agg=max useother=false BY host
| addtotals

[Screenshots: after most refreshes the single value shows 0; every few refreshes it shows 3 as expected. The time-series plot looks correct.]
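One hedged guess about the cause: with span=15s, the newest 15-second bucket often has not received a scrape yet at refresh time, and the single-value panel picks up that empty bucket. For a single value, skipping the timechart and taking the latest reported value over the whole window avoids the edge bucket entirely, e.g.:

```spl
| mstats latest(_value) as connections
    WHERE metric_name="mattermost_db_master_connections_total"
    AND sourcetype=prometheus:metric
```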
Hi all, I have found that all scheduled searches are running in the EST time zone instead of CET. If I look at props.conf in /system/local, the time zone shows TZ=UST. Could you please help me set the CET time zone instead of EST?
I have a query which essentially looks like this:

| makeresults count=1
| eval host="host1, host2, host3, host4, host5, host6"
| makemv tokenizer="([^,]+),?" host
| mvexpand host
| fields - _time
| join type=left host
    [ search index=someIndex host IN (host1, host2, host3, host4, host5, host6)
      | stats count as numEvents, first(field1) as field1Val, first(field2) as field2Value by host ]

As you can see, I have to pass the list of hosts "host1, host2, host3, host4, host5, host6" twice: once in makeresults and again in the subsearch. Is there any way to declare a variable for this list and avoid the duplication? Sometimes this list of hosts can be really long. I want to send this query to a non-IT user who doesn't understand Splunk too well, and I was wondering if I can reduce the hassle for him. Thanks, Ashish
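One way to declare the list once (a sketch): define it as a search macro in macros.conf (or Settings > Advanced search > Search macros) and reference it in both places. Macro expansion is a textual substitution performed before parsing, so it should also expand inside the quoted eval string, but that behavior is an assumption to verify on your version:

```spl
# macros.conf
[hostlist]
definition = host1, host2, host3, host4, host5, host6

# the query then becomes:
| makeresults count=1
| eval host="`hostlist`"
| makemv tokenizer="([^,]+),?" host
| mvexpand host
| fields - _time
| join type=left host
    [ search index=someIndex host IN (`hostlist`)
      | stats count as numEvents, first(field1) as field1Val, first(field2) as field2Value by host ]
```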
I am currently trying to create an SPL query to detect suspicious lateral movement from Windows logs. I have created a query to detect user activity on multiple devices, but I have had no luck with a lateral-movement query.
Hello, I have a search with a multivalue field called HeartBeatTime. I would like to create an alert when the HeartBeatTime is more than 5 minutes old. My question is: how can I get the time difference between _time and HeartBeatTime? Here is my search (the rex capture-group names were stripped when posting; they are reconstructed below from the table command):

index=temp host="ctw-prod-qa"
| rex max_match=5 "serviceUserName=\"(?<UserName>[^\"]+)"
| rex max_match=5 "serviceIPAddress=\"(?<IPAddress>[^\"]+)"
| rex max_match=5 "serviceStartupTime=\"(?<StartupTime>[^\"]+)"
| rex max_match=5 "serviceStatus=\"(?<Status>[^\"]+)"
| rex max_match=5 "serviceHeartBeatTime=\"(?<HeartBeatTime>[^\"]+)"
| eval User_Number = mvcount(UserName)
| eval TimeDiff = _time - strptime(HeartBeatTime, "%Y-%m-%d %H:%M:%S.%3N")
| table _time, UserName, Status, HeartBeatTime, TimeDiff, IPAddress, User_Number
| eval final_User_Number = if(isnotnull(User_Number), User_Number, 0)
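Given that TimeDiff is already in seconds (epoch time minus the parsed heartbeat), the alert condition is just a threshold; 5 minutes is 300 seconds. A sketch appended to the search above:

```spl
... | where TimeDiff > 300
```

One caveat: if HeartBeatTime really is multivalue, strptime may not evaluate per value; computing a per-value diff with mvmap (available in newer Splunk versions) and alerting on the largest one would be the safer variant.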
Hi Splunkers, I have a donut visualisation in my dashboard (screenshot attached). Since drilldown functionality is not built into https://splunkbase.splunk.com/app/3238/ I have written JS for click events on the donut slices. I have tried the submitted/unsubmitted logic, and used set and unset logic too, but it doesn't seem to resolve my issue. Here is my JS (laid out with a helper; behavior unchanged):

require([
    "splunkjs/mvc",
    "jquery",
    "splunkjs/ready!",
    "splunkjs/mvc/simplexml/ready!"
], function (mvc, $) {
    var defaultTokenModel = mvc.Components.get("default");
    var submittedTokenModel = mvc.Components.get("submitted");

    // set the slice color and the panel alert on both token models
    function setTokens(color, alert) {
        defaultTokenModel.set("color", color);
        submittedTokenModel.set("color", color);
        defaultTokenModel.set("alert", alert);
        submittedTokenModel.set("alert", alert);
    }

    $(document).on("click", "#payment_css g.c3-chart-arc.c3-target-Warning",  function () { setTokens("Warning",  "mobile"); });
    $(document).on("click", "#payment_css g.c3-chart-arc.c3-target-Critical", function () { setTokens("Critical", "mobile"); });
    $(document).on("click", "#payment_css g.c3-chart-arc.c3-target-Normal",   function () { setTokens("Normal",   "mobile"); });
    $(document).on("click", "#freedisk_css g.c3-chart-arc.c3-target-Warning",  function () { setTokens("Warning",  "freedisk"); });
    $(document).on("click", "#freedisk_css g.c3-chart-arc.c3-target-Critical", function () { setTokens("Critical", "freedisk"); });
    $(document).on("click", "#freedisk_css g.c3-chart-arc.c3-target-Normal",   function () { setTokens("Normal",   "freedisk"); });
});

In the JS above, the token "alert" depends on the panel and the token "color" depends on the donut slice. The issue is: I have a different table for each of these alerts, i.e. mobile and freedisk. In my XML I have given (the XML snippet did not come through in the post). If I click on the mobile panel, the table for freedisk also appears with the message "No Result Found", and vice versa. What I want is to toggle between the tables based on the click.
Hello, I have a working Splunk Enterprise and a Splunk universal forwarder on two different CentOS VM instances. I can successfully forward logs from the UF to Splunk Enterprise, and I can search them there. However, I want to collect metrics using the Splunk App for Infrastructure (SAI). I'm quite new to Splunk and a bit confused by this part of the installation. I installed the app on Splunk Enterprise, then installed the add-on on both Splunk Enterprise and the UF. I tried the automatic install using the generated Linux command, but unfortunately, after it downloaded, my SAI entities were still empty. I've tried this many times, so now I'm trying to install it manually, following https://docs.splunk.com/Documentation/InfraApp/2.0.3/Admin/ManualInstalLinuxUF. I wanted to ask: do I need to install a separate UF for SAI, per step 1 of that doc, or is my existing working UF enough? Furthermore, where should I alter inputs.conf and outputs.conf: on the existing UF, on an additional UF, or on the existing Splunk Enterprise instance? I've been struggling with this for quite some time. Thank you in advance for any help!
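On the question of where the .conf edits go: for the UF-based route they belong on the existing forwarder, typically in the SAI add-on's local directory (or system/local). A sketch of the forwarding side, with the indexer address and group name as placeholders:

```ini
# outputs.conf on the existing universal forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <splunk-enterprise-ip>:9997
```

The inputs.conf stanzas for metrics collection come from the SAI add-on itself, and since this UF already forwards successfully, a second UF should not be needed (step 1 of that doc covers a host with no forwarder yet). Treat both statements as assumptions to verify against the SAI docs for your version.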
Hello, I want to use an ITSI content pack as a base for implementing ITSI. I see documentation about it at https://docs.splunk.com/Documentation/ITSICP/current/Config/About but it does not explain how to generate data for the content pack. As we know, ITSI needs data to run and to show the Service Analyzer, glass tables, etc. Is there a way to generate data for an ITSI content pack? Or is it just a template for ITSI that we can use and tune for the entities and services we want, so we don't need to start the ITSI implementation from zero? Thank you
Hello, I have a dashboard with a dropdown input populated by a search. One result from this search is "Valid", which gets translated as "Valide" in French. How can I avoid this translation and display "Valid" even in French? Thanks, Christian
Dear Splunkers, I am trying to implement end-to-end monitoring where searches depend on multiple lookups, and those lookups are derived from different searches running internally. The idea is to diagnose a problem thoroughly and as early as possible, without running all the searches/alerts all the time, because there are at least 60-70 searches in production. Please look at the example below for more clarity.

Search 1:

| inputlookup Lookup1
| rename sys_id AS app_sys_id
| lookup Lookup2 parent AS app_sys_id OUTPUTNEW child AS server_sys_id
| mvexpand server_sys_id
| join server_sys_id
    [ | inputlookup Lookup3 | rename sys_id AS server_sys_id ]
| fields host, server_fqdn, server_status, server_support_group, server_type, server_sox, server_sas, server_admin, server_location, server_environment
| outputlookup Lookup1

As you can see, Lookup1 depends on two lookups internally (Lookup2 and Lookup3).

Search 2 (this sourcetype eventually creates Lookup2):

sourcetype=someother_source2 | outputlookup Lookup2

Search 3 (this sourcetype eventually creates Lookup3):

sourcetype=someother_source3 | outputlookup Lookup3

This is just an example; in production I have more than 30 searches chained in this manner, and I do not want to create 30 alerts running unnecessarily at 30-odd timings. I was thinking of creating an alert that checks whether Search 1 returns zero results. If it does, I should be alerted, and only then should it internally run a report (Search 2) to check whether data is missing there; if not, it moves on to Search 3 to check further. If something is missing, we should be alerted that data is missing in Search 2 or Search 3. This would help me diagnose the situation early, without unnecessary alerts running on my search head all the time. Any ideas would be appreciated; I am okay using alerts, reports, dashboards, scripts, etc. Again, thanks in advance.
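The cascade could be collapsed into a single health-check alert that counts each lookup and names the most upstream empty one, so one scheduled search diagnoses the whole chain. A sketch using the lookups from the example (extend the case() per chain in production):

```spl
| inputlookup Lookup1 | stats count as c1
| appendcols [ | inputlookup Lookup2 | stats count as c2 ]
| appendcols [ | inputlookup Lookup3 | stats count as c3 ]
| eval problem = case(c2=0, "Lookup2 is empty - check Search 2",
                      c3=0, "Lookup3 is empty - check Search 3",
                      c1=0, "Lookup1 is empty - check Search 1",
                      true(), null())
| where isnotnull(problem)
```

Alerting on "number of results > 0" then fires only when something in the chain is empty, and the problem field says where to look first.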