All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I was trying to create a global field for newly indexed data, so I tried the automatic lookup settings. For example, the datacenter name is not present in the indexed data, so I wanted to populate it using an automatic lookup. I am able to do that, but only for one sourcetype, and I have 100+ sourcetypes. Is there any way to set "Apply to" sourcetype/host for multiple values? Please let me know.
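In case it helps later readers: one way around the one-stanza-per-sourcetype limit is to define the automatic lookup in props.conf against a wildcarded host:: (or source::) stanza instead of a single sourcetype. A minimal sketch, assuming a lookup definition named datacenter_lookup keyed on host (both names are hypothetical):

# props.conf -- applies to every event from matching hosts, regardless of sourcetype
[host::*]
LOOKUP-datacenter = datacenter_lookup host OUTPUT datacenter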
Hello everyone! I have time in the format 2022-09-02T18:44:15. This time is in GMT+3, and I need to convert it to UTC. Can you help me?
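A minimal SPL sketch of one common approach, assuming the timestamp lives in a field named event_time (a hypothetical name) and the offset is a fixed +3 hours: parse to epoch with strptime, subtract the offset, and format back.

| eval epoch=strptime(event_time, "%Y-%m-%dT%H:%M:%S")
| eval utc_time=strftime(epoch - 3*3600, "%Y-%m-%dT%H:%M:%S")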
Splunk Connect for Zoom stopped working after Zoom enforced the use of SSL certificates on 2022/07/20. After support tickets with Zoom and Splunk, here is some experience I would like to share. Using SSL certificates signed by a private or internal CA did not work; it seems I had to use a certificate signed by a commercial CA such as Entrust. If you want to chain your SSL certificate with the Entrust root and intermediate certificates, please ensure that the certificates appear in the following order after running this command:

openssl crl2pkcs7 -nocrl -certfile yoursslcertificate.entrust.pem | openssl pkcs7 -print_certs -noout

Or you could just include the commercially issued SSL certificate without the root and intermediate certificates.

subject=/C=US/ST=STATE/L=CITY/O=ORG, Inc./CN=mycompany.com
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2012 Entrust, Inc. - for authorized use only/CN=Entrust Certification Authority - L1K

subject=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2012 Entrust, Inc. - for authorized use only/CN=Entrust Certification Authority - L1K
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2

subject=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2

If all works after restarting Splunk, running netstat -nap | grep 4443 will show the following connections from Zoom IP addresses, and you will see logs under sourcetype=zoom:webhook:

tcp 0 0 0.0.0.0:4443 0.0.0.0:* LISTEN 25849/python3.7
tcp 0 0 10.#.#.#:4443 3.235.82.171:41101 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:58497 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:54514 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:48513 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:53006 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:55259 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:46028 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:52837 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:7527 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:12934 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.83.101:32088 TIME_WAIT -
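For later readers: assuming the usual PEM layout, a chain file in that order (server certificate, then intermediate, then root) can be assembled with a plain concatenation. All file names here are hypothetical:

cat mycompany.pem entrust_l1k_intermediate.pem entrust_g2_root.pem > zoom_chain.pem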
Hello all, how is it possible to change the default dump folder on Windows?
I had so much trouble with this, but I figured I would share what I did to make it work for me. You may have other ways of doing it, but I found very little guidance online to help someone going through the process. If you have done other things that worked for you, feel free to reply and share.
Hello all, a Splunk newbie here. The company I work for wants to monitor some licenses that are being used. The logs show the user and the type of license they have. The type is for the most part IN (not using) or OUT (using the license), and sometimes DENIED, but that is not of interest currently. Because users sometimes forget to log off, we account for this by looking at the data over the past 2 weeks: I take the most recent type for each user and count the users whose type is OUT, since that means the user is holding a license. This gives a count of OUT over the past 2 weeks, which matches what the license manager shows fairly closely. This count needs to be shown every 5 minutes on a (time)chart. So, is it possible to have a (time)chart that runs a count over the past 2 weeks every 5 minutes? The query I have:

base search
| dedup 1 user sortby -_time
| table user type _time
| search type=out

This gives me only the users whose latest type is OUT, which means they are the ones using a license. Again, I would like to count the number of OUTs over the past 2 weeks and have that number recalculated every 5 minutes and shown on a (time)chart. I have tried loads of things (from other posts) but did not manage to get it to work. There is already a workaround where we use an ETL tool with the Splunk API as middleware, but I thought there should be a more efficient way to do it. If any more info is needed I can (hopefully) provide it. Thanks in advance, M.
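A sketch of one common pattern, in case it is useful: schedule the counting search every 5 minutes over a 2-week window, write the single number to a summary index with collect, and chart from that summary. The index name license_summary is hypothetical:

base search earliest=-14d
| dedup 1 user sortby -_time
| search type=out
| stats count as licenses_in_use
| collect index=license_summary

The dashboard panel then reads the collected data points:

index=license_summary | timechart span=5m max(licenses_in_use)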
Can we configure inputs.conf to define a port with multiple sourcetypes? For example:

[tcp://6134]
index = top
sourcetype = mac_log
sourcetype = tac_log
disabled = 0

Or is there any way to segregate logs coming in on one port into different sourcetypes?
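For what it's worth, an input stanza takes a single sourcetype value (a second sourcetype= line simply overrides the first). The usual way to segregate one port into multiple sourcetypes is to set a default sourcetype on the input and rewrite it at index time with a props/transforms pair. A minimal sketch, where ^TAC- is a hypothetical pattern identifying the second log type:

# props.conf -- applied to events that arrive with the default sourcetype
[mac_log]
TRANSFORMS-set_sourcetype = force_tac_sourcetype

# transforms.conf -- rewrite the sourcetype when the pattern matches
[force_tac_sourcetype]
REGEX = ^TAC-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::tac_log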
Hi all, we are using the NetApp Cloud Secure add-on to collect Cloud Secure data. We have configured the input but are not getting all the data. Below is the configuration; please suggest if anything needs to be added.

[cloud_secure_alerts://******]
builtin_system_checkpoint_storage_type = auto
entityaccessedtime = 1635795607850
index = main
interval = 60
netapp_secure_insight_fqdn = ********.cloudinsights.netapp.com
sourcetype = netapp:cloud_secure:alerts
I am trying to configure NEAP policy action rules to integrate ServiceNow incident comments by passing a token, but it looks like Splunk doesn't support tokens in NEAP action rules. I heard there is a custom script that can pass the tokens; does anybody have an idea about this customization and how we can achieve it?
We have enabled the bidirectional correlation search for ServiceNow in our ITSI; unfortunately, the itsi_notable_event_external_ticket lookup is not updating with the proper values. I couldn't find the saved search that updates the lookup, so I can't troubleshoot further. Can someone tell me how the itsi_notable_event_external_ticket lookup is updated?
I have borrowed a search from an earlier question to get kWh information for a given month. How can I modify the search to show only the host_name and the sum total of the avg_kWh column?

index=network sourcetype=zabbix metric_name="st4InputCordActivePower" host_name="pdu02.LON5.Contoso.com"
| bin _time span=1h
| stats count as samples sum(value) as watt_sum by _time
| eval kW_Sum=watt_sum/1000
| eval avg_kWh=kW_Sum/samples
| addcoltotals

Sample output:

_time              samples   watt_sum      avg_kWh      kW_Sum
2022-05-30 18:00   12        44335.0       3.69458      44.3350
....
2022-05-31 23:00   12        43489.0       3.62408      43.4890
                   7686      27425688.0    2595.96346   27425.6880
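A sketch of one way to get just that, building on the same search: carry host_name through the first stats, then collapse the hourly rows into a single total per host:

index=network sourcetype=zabbix metric_name="st4InputCordActivePower" host_name="pdu02.LON5.Contoso.com"
| bin _time span=1h
| stats count as samples sum(value) as watt_sum by _time host_name
| eval avg_kWh=(watt_sum/1000)/samples
| stats sum(avg_kWh) as total_avg_kWh by host_name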
Hi all, I'm hoping someone can help or point me in the right direction. I have two events being fed into Splunk: one is the raising of an event flag, the other is the removal of the event flag.

Raising:
Sep 2 10:32:45 SOFTWARE CEF:0|SOFTWARE|CLIENT|42|Agent Log Event|Agent Log Event|high|id=123 shost=Management start=2022-09-02 10:32:42 cs1Label=Affected Agents cs1=[SERVERNAME] (ip: None, component_id: ID) msg='AgentMissing' status flag was raised

Removal:
Sep 2 10:34:33 SOFTWARE CEF:0|SOFTWARE|CLIENT|42|Agent Log Event|Agent Log Event|high|id=123 shost=Management start=2022-09-02 10:34:33 cs1Label=Affected Agents cs1=[SERVERNAME] (ip: None, component_id: ID) msg='AgentMissing' status flag was removed

After some browsing online and through the Splunk support pages, I was able to put together the following query:

(index=[INDEX] *agentmissing*) ("msg='AgentMissing' status flag was raised" OR "msg='AgentMissing' status flag was removed")
| rex field=_raw ".*\)\s+(?<status>.*)"
| stats latest(_time) as flag_finish by connection_type
| join connection_type
    [ search index=[INDEX] ("msg='AgentMissing' status flag was raised") connection_type=*
    | stats min(_time) as flag_start by connection_type]
| eval difference=flag_finish-flag_start
| eval flag_start=strftime(flag_start, "%Y-%m-%d %H:%M")
| eval flag_finish=strftime(flag_finish, "%Y-%m-%d %H:%M")
| eval difference=strftime(difference,"%H:%M:%S")
| table connection_type, flag_start, flag_finish, difference
| rename connection_type as Hostname, flag_start as "Flag Raised Time", flag_finish as "Flag End Time", difference as "Total Time"
| sort - difference

The above works; however, because I am using the stats latest command, it only shows the latest occurrence of the event. I would like to display the time between these events for multiple occurrences. So in the example above, which ran between 7:47 and 9:31, I would also like to see flags for other time ranges. TIA!
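For multiple occurrences, one common sketch is the transaction command, which pairs each raise with the next removal per connection_type and emits a duration for each pair (field names follow the poster's query):

(index=[INDEX] *agentmissing*) ("status flag was raised" OR "status flag was removed")
| transaction connection_type startswith="status flag was raised" endswith="status flag was removed"
| eval flag_start=strftime(_time, "%Y-%m-%d %H:%M")
| eval flag_finish=strftime(_time + duration, "%Y-%m-%d %H:%M")
| table connection_type flag_start flag_finish duration
| sort - duration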
I have to decrease the font size of the field names, like subgroup, platforms, bkcname, etc. (all fields present in the table), and make the count, which is present in the table, bold. But I want to change only one particular table, not all the tables present in the dashboard:

<row>
<panel>
<title>Platform wise Automation Status Summary</title>
<table>
<search>
<query>index=network_a

I want to change only the above table (Platform wise Automation Status Summary). Any help would be greatly appreciated!!
Hi, I'm trying to extract some fields from my Aruba access point logs in order to be CIM compliant. For authentication I have two kinds of events:

Login failed:
cli[5405]: <341004> <WARN> AP:ML_AP01 <................................> Client 60:f2:62:8c:a8:a7 authenticate fail because RADIUS server authentication failure

Login success:
stm[5434]: <501093> <NOTI> AP:ML_AP01 <..................................> Auth success: 60:f2:62:8c:a8:a7: AP ...................................ML_AP01

My goal is to extract the MAC address after "Client" in the first log and the MAC after "Auth success:" in the second one into a common field called "src". Can someone please help me? Thanks in advance!
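A sketch of a single rex that covers both formats, assuming the MAC is always colon-separated hex as in the samples:

| rex field=_raw "(?:Client\s+|Auth success:\s+)(?<src>(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2})"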
Hi all, I have JSON data as below.

{
  "Info": {
    "Unit": "ABC",
    "Project": "XYZ",
    "Analysis Summary": {
      "DB 1":  {"available": "1088kB",  "used": "172.8kB",   "used%": "15.88%", "status": "OK"},
      "DB2 2": {"available": "4096KB",  "used": "1582.07kB", "used%": "38.62%", "status": "OK"},
      "DB3 3": {"available": "128KB",   "used": "0",         "used%": "0%",     "status": "OK"},
      "DB4 4": {"available": "16500KB", "used": "6696.0KB",  "used%": "40.58%", "status": "OK"},
      "DB5 5": {"available": "22000KB", "used": "9800.0KB",  "used%": "44.55%", "status": "OK"}
    }
  }
}

I want to create a table like this:

Database   available   used        used%    status
DB1        4096KB      1582.07kB   38.62%   OK
DB2        1088kB      172.8kB     15.88%   OK
DB3        16500KB     6696.0KB    40.58%   OK
DB4        22000KB     9800.0KB    44.55%   OK
DB5        128KB       0           0%       OK

I know how to extract the data, but I am not able to put it into a table in this format. Does anyone have an idea?
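One common sketch for flattening repeated JSON objects into rows: capture all five values with a multivalue rex, zip them together, expand, and split. Here used% is renamed used_pct because % is awkward in an SPL field name, and the regex assumes the compact raw form of the event:

| rex field=_raw max_match=0 "\"(?<Database>DB[^\"]+)\":\s*\{\"available\":\s*\"(?<available>[^\"]*)\",\"used\":\s*\"(?<used>[^\"]*)\",\"used%\":\s*\"(?<used_pct>[^\"]*)\",\"status\":\s*\"(?<status>[^\"]*)\"\}"
| eval rows=mvzip(mvzip(mvzip(mvzip(Database, available, "|"), used, "|"), used_pct, "|"), status, "|")
| mvexpand rows
| eval rows=split(rows, "|")
| eval Database=mvindex(rows,0), available=mvindex(rows,1), used=mvindex(rows,2), used_pct=mvindex(rows,3), status=mvindex(rows,4)
| table Database available used used_pct status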
Hi, I have installed the Splunk forwarder on an AIX server and can successfully see the server-level results (CPU, df, memory) in the dashboard. I am now planning to install the Splunk add-on for WebSphere Process Server version 7.0. May I know whether the "Splunk Add-on for WebSphere Application Server" will work for this older version of WebSphere Process Server? Your input will be appreciated.
Hello Splunk enjoyers! I have a problem. Information about routers arrives every minute. What I have: name_of_router and the client's serial_number in index=routers. What I want: an alert if the serial_number has changed. How should I do this?
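A minimal sketch of one way to detect a change, meant to run as a scheduled alert over a window wide enough to contain both the old and the new serial (index and field names are the poster's):

index=routers
| stats dc(serial_number) as serial_count values(serial_number) as serials by name_of_router
| where serial_count > 1

Trigger the alert when the number of results is greater than 0. For changes that span longer gaps, a lookup holding the last known serial per router works better than widening the window.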
Hi all, I wish to generate login times for a list of users specified in a lookup table titled user_list.csv. The column header of the list of users is "IDENTITY". Currently I have an index that, on its own without the lookup table, already has a field called "Identity". This index gives me any user's login times within the specified timeframe as long as I specify Identity="*"; without specifying Identity="*" or a user's name, the events do not populate. What I am trying to do is feed in a specified list of users and check their login times. However, with the following search query I get 0 events:

index=logintime [|inputlookup user_list.csv |fields IDENTITY |format] IDENTITY="*"
| table _time, eventType, ComputerName, IDENTITY

I have already checked that the lookup table is within the same app. Please help, thank you.
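One common cause, in case it applies here: SPL field names are case sensitive, so the subsearch emits IDENTITY=... terms while the events carry Identity. A sketch of the usual fix is to rename inside the subsearch so the generated terms match the event field:

index=logintime [| inputlookup user_list.csv | rename IDENTITY as Identity | fields Identity | format]
| table _time, eventType, ComputerName, Identity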
Hi, I have a metric with one dimension containing an integer value. I need to apply a calculation to the metric based on the dimension value. The formula to apply to each data point would be something like this:

metric_value*100/dimensionA_value

I have seen dimensions used extensively as filters, but I was not able to find a way to reference the dimension value so that I can use it in a calculation like the one above. Any idea how I could accomplish that? Thanks in advance, Cesar
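If this is a Splunk metrics index queried with mstats, one sketch is to group by the dimension and convert it to a number for the eval, since dimensions arrive as string fields. The metric and index names here are hypothetical:

| mstats avg(_value) as metric_value WHERE index=my_metrics AND metric_name="my.metric" span=1m BY dimensionA
| eval result = metric_value * 100 / tonumber(dimensionA)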
I am getting "The search job terminated unexpectedly" in a dashboard. In search, the index is working fine, and this happens in one dashboard only; other dashboards are working fine. I don't know what the reason for this issue is. Can anyone please help me? Thanks in advance.