All Topics

Hi Team, Is there a specific add-on and documentation for integrating LogMeIn logs into Splunk Cloud? If so, kindly let me know. https://www.logmein.com/central https://www.logmein.com/pro https://www.lastpass.com/
Hello, I have the following panel in my dashboard:

<panel id="Averages">
  <table>
    <title>Average Startup Duration for $host$</title>
    <search>
      <query>| inputcsv StartupMinMaxAvg.txt
| where host = "$host$"
| rename avg_total as "1). Total Startup Time" avg_logger as "2). ---- Logger" avg_pm as "3). ---- Physical Memory Init" avg_transmgmt as "4). ---- Trans Management" avg_rowstore as "5). ---- Rowstore Load" avg_cs_load as "6). ---- Column Store Load"
| table "1). Total Startup Time" "2). ---- Logger" "3). ---- Physical Memory Init" "4). ---- Trans Management" "5). ---- Rowstore Load" "6). ---- Column Store Load"
| transpose
| rename "column" as "Startup Phase", "row 1" as "Duration"</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </table>
</panel>

I don't really like the look of the table visualization itself; it just does not fit with the rest of the panels. Is it possible to render it as an HTML table, say with 25% width, without visible table lines, and with a bold title? Or do HTML tables have to be static, unable to accept output from a search? Kind Regards, Kamil
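As a rough illustration of the target markup (not Splunk's own rendering, which is controlled by the dashboard framework), here is a minimal Python sketch that turns search-style rows into a borderless, 25%-wide HTML table with a bold header. The helper name and the sample rows are made up for this example:

```python
def render_html_table(rows, width="25%"):
    """Render (phase, duration) pairs as a borderless HTML table.

    rows: list of (startup_phase, duration) tuples.
    Returns an HTML string with a bold header row and no visible grid lines.
    """
    out = ['<table style="width: {}; border: none; border-collapse: collapse;">'.format(width)]
    out.append('  <tr><th style="text-align: left;"><b>Startup Phase</b></th>'
               '<th style="text-align: left;"><b>Duration</b></th></tr>')
    for phase, duration in rows:
        out.append('  <tr><td>{}</td><td>{}</td></tr>'.format(phase, duration))
    out.append('</table>')
    return "\n".join(out)

print(render_html_table([("1). Total Startup Time", "12.3"),
                         ("2). ---- Logger", "1.1")]))
```

In a dashboard, the same effect is usually achieved by keeping the table element and restyling it with CSS rather than by replacing it with a static html panel.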
Hello, We are planning to deploy a standalone machine agent and database agent (v4.5.17), but the memory usage is too high:

Machine agent: 1GB
Database agent: 620MB

We are trying to fine-tune the Java options in the vmoptions files in order to reduce the memory footprint, with these values:

-XX:MaxPermSize=10m  (20MB seems to be the default)
-Xmx32m  (256MB seems to be the default)
-Xms2m
-Xss32m

With these settings, the machine agent seems stable at 250MB and the DB agent at 150MB. Has anyone done this on production servers? Regards, Thierry QUESSADA
Hello, This is my query:

| loadjob savedsearch="myquery"
| where strftime(_time, "%Y-%m-%d") >= "2020-02-26"
| stats dc(eval(if(STEP=="show",ID_RF_ATOS,null()))) AS show, dc(eval(if(STEP=="clic",ID_RF_ATOS,null()))) AS clic, dc(eval(if(STEP=="send",ID_RF_ATOS,null()))) AS send by company, city
| where show>0
| stats sum(show) AS show, sum(clic) AS clic, sum(send) AS send by city
| eval rate=round(((show-(clic+send))/show*100),2)." %"
| table city, show, clic, send, rate

I want to calculate the rate by city and add it to the table.
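The per-city rate formula in the query above can be checked with a minimal Python sketch of the same arithmetic: sum the counts per city, then apply rate = (show - (clic + send)) / show * 100. The city name and counts below are made-up sample values:

```python
def rate_by_city(rows):
    """rows: list of dicts with keys city, show, clic, send.

    Sums the counts per city, then computes
    rate = (show - (clic + send)) / show * 100, rounded to 2 decimals,
    keeping only cities where show > 0 (mirroring the SPL pipeline).
    """
    totals = {}
    for r in rows:
        t = totals.setdefault(r["city"], {"show": 0, "clic": 0, "send": 0})
        for k in ("show", "clic", "send"):
            t[k] += r[k]
    return {
        city: round((t["show"] - (t["clic"] + t["send"])) / t["show"] * 100, 2)
        for city, t in totals.items() if t["show"] > 0
    }

print(rate_by_city([
    {"city": "Paris", "show": 100, "clic": 20, "send": 30},
    {"city": "Paris", "show": 100, "clic": 10, "send": 0},
]))  # Paris: (200 - 60) / 200 * 100 = 70.0
```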
I am new to Splunk and still learning. I have more than 100 queries to run on request during a daily activity, and it is a pain to copy and paste each one every time the team asks me to run it for some kind of validation. Is there any way I can simply run them from Excel, for example by clicking on a query (made into a link) so that it opens Splunk in the browser and runs the query? Or any other option that serves the purpose? Any help would be appreciated. Thanks.
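Splunk Web can run a search passed in the q parameter of a search-page URL, so one approach is to generate a hyperlink per query and paste those into Excel. A minimal Python sketch of the URL building, where the base URL is a hypothetical placeholder for your own Splunk host, port, and app:

```python
from urllib.parse import quote

# Hypothetical base URL of your Splunk Web instance; adjust host/port/app.
SPLUNK_BASE = "https://splunk.example.com:8000/en-US/app/search/search"

def search_link(query):
    """Build a clickable Splunk Web URL that opens and runs the given query.

    Splunk Web expects the search string (including a leading "search "
    keyword for non-generating searches) in the "q" parameter, URL-encoded.
    """
    return SPLUNK_BASE + "?q=" + quote("search " + query, safe="")

print(search_link('index=main sourcetype=access_combined status=500'))
```

For a list of queries, writing the links into a CSV that Excel can open (one HYPERLINK formula per row) avoids all the copy-and-paste.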
Hi, on Splunkbase it says the app is compatible with version 7.3 and lower. Is it safe to install it on 8.0.1 or not? I am asking because I have had incompatible apps bring the whole Splunk deployment down.
Hi Experts, Splunk UF is installed on my NFS server. That NFS server is basically a log storage server; a log rotation daemon also runs on it and gzips each file after 24 hours, in the same location. It is a single NFS server holding a really large amount of data. But sometimes my UF does not forward some files' data from the NFS server to my indexer servers, and many files end up missing in my Splunk indexers. The following parameters are the same for many of the sourcetypes in props.conf (yes, many events are really big):

TRUNCATE = 20000
MAX_EVENTS = 512
BREAK_ONLY_BEFORE = < [Set] >

Please suggest how I can improve my UF performance.
.............
| rex field=user mode=sed "s/./ /g"
| eval user=lower(user)
| eval date_hour=strftime(_time, "%H")
| search date_hour>=4 date_hour<=23
| convert timeformat="%a %B %d %Y" ctime(_time) AS Date
| streamstats earliest(_time) AS login, latest(_time) AS logout by Date, user
| eval session_duration=logout-login
| eval h=floor(session_duration/3600)
| eval m=floor((session_duration-(h*3600))/60)
| eval SessionDuration=h."h ".m."m "
| convert timeformat=" %m/%d/%y - %I:%M %P" ctime(login) AS login
| convert timeformat=" %m/%d/%y - %I:%M %P" ctime(logout) AS logout
| stats count AS auth_event_count, earliest(login) as login, max(SessionDuration) as session_duration, latest(logout) as logout, values(Logon_Type) AS logon_types by Date, user
| sort + user
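The session-duration formatting in this pipeline (floor of hours, then the remaining minutes) can be sketched in Python to sanity-check the eval logic:

```python
def format_duration(seconds):
    """Mirror the SPL evals: h = floor(s/3600), m = floor((s - h*3600)/60)."""
    h = seconds // 3600
    m = (seconds - h * 3600) // 60
    return f"{h}h {m}m"

print(format_duration(7380))  # 2 hours and 3 minutes -> "2h 3m"
```

Note one caveat with the original search: max(SessionDuration) compares the formatted strings, so "9h 0m " would sort above "10h 0m "; taking max of the numeric session_duration before formatting avoids that.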
I have not been receiving this log from one host for the last 10 days. I want to display 2 hosts, one showing up and the other showing down, based on the System.System_Up_Time metric log name:

host one: receiving the System.System_Up_Time metric log
host two: not receiving System.System_Up_Time for the past 7 days
I have installed the Splunk server and universal forwarders on different CentOS machines, and created an HEC token on the Splunk server. That token is used to collect metrics (Splunk App for Infrastructure), but I am not able to see the metrics data in the dashboard, even though my Splunk forwarder's logs are being sent to the server.
Query:

index=systemdetails source=sytemdetails* Condition = 0
| eval [ search index=systemdetails source=sytemdetails* Condition != 0
    | head 1
    | eval EL = "1584081083.114 ABC-12345 , 1584081089.114 DEF-678910"
    | makemv delim="," EL
    | fields EL
    | return EL ]
| eval Final_EL = split(EL,",")
| eval ET = mvindex(split(Final_EL," "),0)
| eval EMN = mvindex(split(Final_EL," "),1)

I am successfully able to generate the "Final_EL" multivalue field for each event:

Final_EL = 1584081083.114 ABC-12345
Final_EL = 1584081089.114 DEF-678910

Requirement: each event should have the multivalue fields ET and EMN:

ET = 1584081083.114
ET = 1584081089.114
EMN = ABC-12345
EMN = DEF-678910

I tried both of the below ways, but neither works:

| rex max_match=0 field=Final_EL "(?((.*?),){0,})"
| eval ET = mvindex(split(Final_EL," "),0)

Kindly help.
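The reason mvindex(split(...)) fails here is that split applied to a multivalue field does not operate per element; in SPL this is typically solved per element (for example with mvmap on Splunk 8.0+). The intended transformation itself, splitting each "timestamp name" pair into two parallel multivalue fields, is easy to sketch in Python:

```python
def split_pairs(final_el):
    """final_el: list of "timestamp name" strings (the Final_EL multivalue).

    Returns (ET, EMN): parallel lists of timestamps and names, mirroring
    the two multivalue fields the question asks for.
    """
    et, emn = [], []
    for entry in final_el:
        ts, name = entry.strip().split(" ", 1)
        et.append(ts)
        emn.append(name.strip())
    return et, emn

et, emn = split_pairs(["1584081083.114 ABC-12345", " 1584081089.114 DEF-678910"])
print(et)   # ['1584081083.114', '1584081089.114']
print(emn)  # ['ABC-12345', 'DEF-678910']
```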
Hello everyone! I have a static lookup with two fields/columns, tag and State. The default value of State is "Enabled" for all tags, as below:

tag        State
platformA  Enabled
platformB  Enabled
platformC  Enabled
platformD  Enabled

I want to give users the ability to change the State of a tag to "Disabled" as and when required. So basically they would alter this static lookup and write the modified result back to the same lookup, as below:

tag        State
platformA  Disabled
platformB  Enabled
platformC  Enabled
platformD  Enabled

I am able to replace the State in my search, but if I use append with outputlookup it appends a new record, and if I don't use append it deletes all other existing records and keeps only the newly replaced one. I have tried these two queries:

Query 1:
| inputlookup tags.csv where tag=platformA
| replace "Disabled" with "Enabled" in State
| outputlookup append=t tags.csv

Query 2:
| inputlookup tags.csv where tag=platformA
| replace "Disabled" with "Enabled" in State
| outputlookup tags.csv

Am I missing anything? Or is there a better approach to this? Thanks in advance.
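The usual pattern is to read the whole lookup (no where filter), change only the matching row, and write everything back without append — in SPL, inputlookup, then a conditional eval on State, then outputlookup. The merge semantics can be sketched in Python with made-up rows:

```python
def set_state(rows, tag, new_state):
    """Return a full copy of the lookup with only the matching tag's State changed.

    Mirrors: | inputlookup tags.csv
             | eval State=if(tag="<tag>", "<new_state>", State)
             | outputlookup tags.csv
    """
    return [
        {**r, "State": new_state} if r["tag"] == tag else dict(r)
        for r in rows
    ]

lookup = [{"tag": "platformA", "State": "Enabled"},
          {"tag": "platformB", "State": "Enabled"}]
updated = set_state(lookup, "platformA", "Disabled")
print(updated)
```

Because every row passes through the search, nothing is lost on rewrite, which is exactly what the filtered inputlookup in the question prevents.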
Hi All, Is there any recent test, .conf discussion, or documentation following up on the Splunk blog from 2016 below? https://www.splunk.com/en_us/blog/tips-and-tricks/universal-or-heavy-that-is-the-question.html Is it still 6 times lower with the UF?
Hi, I have configured the Event Hub inputs as per the Microsoft Azure Add-on for Splunk documentation to fetch data from Azure into Splunk. Below are the input and configuration details provided in the Splunk add-on.

Configuration --> Account:
Account name: Azure_test
Client ID: dXXXXXa-07xxx-4xx0-9d4d-273XXXXXXX (taken from the Azure Application ID)
Client Secret: xxxxxxxxxxxxxx (taken from the Azure Application Secret Key)

Configuration --> Proxy: no proxy configured
Configuration --> Logging: INFO

Inputs:

[azure_event_hub://Azure_EventHub_Test]
connection_string = ********
consumer_group = $Default
event_hub_name = insights-operational-logs
event_hub_timeout = 5
index = microsoft_azure
interval = 30
max_batch_set_iterations = 100
max_batch_size = 100
number_of_threads = 4
source_type = azure:eventhub
sourcetype = azure:eventhub

In Splunk internal logs, I see these errors:

index=_internal sourcetype="ta:ms:aad:log" source=*hub*

Error details:

2020-03-13 08:58:59,391 ERROR pid=19115 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_event_hub.py", line 92, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_azure_event_hub.py", line 111, in collect_events
    client = EventHubClient.from_connection_string(connection_string, event_hub_path=event_hub_name, http_proxy=HTTP_PROXY)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client_abstract.py", line 272, in from_connection_string
    address, policy, key, entity = _parse_conn_str(conn_str)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client_abstract.py", line 53, in _parse_conn_str
    raise ValueError("Invalid connection string")
ValueError: Invalid connection string

INFO details:

2020-03-13 08:58:59,391 INFO pid=19115 tid=MainThread file=setup_util.py:log_info:114 | Proxy is not enabled!
2020-03-13 08:58:58,216 INFO pid=19115 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1

I am using an Azure free subscription, with Splunk 8.0.2 Enterprise trial installed on a Google Cloud VM instance running Ubuntu 18.04 LTS, with the latest Python 3.7.5 configured as default. Kindly guide me on how to fix this issue, and correct me if any of the input details should be changed.
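The ValueError comes from the add-on failing to parse the Event Hub connection string, which is expected in the form Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key> (as copied from the Event Hub namespace's Shared access policies blade). A minimal Python sketch of that shape check, with a made-up namespace and key:

```python
def check_conn_str(conn_str):
    """Return the required keys missing from an Event Hub connection string.

    A valid string looks like (EntityPath is optional):
    Endpoint=sb://<ns>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>
    """
    parts = dict(
        p.split("=", 1) for p in conn_str.strip().rstrip(";").split(";") if "=" in p
    )
    required = {"Endpoint", "SharedAccessKeyName", "SharedAccessKey"}
    return sorted(required - parts.keys())

good = ("Endpoint=sb://myns.servicebus.windows.net/;"
        "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abc123=")
print(check_conn_str(good))                    # []
print(check_conn_str("Endpoint=sb://myns/"))   # the two SharedAccess* keys are missing
```

If a bare hub URL or just the key was pasted into the connection_string field, that would explain this exact traceback.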
Hi Ninjas, I have a radio button with two values, STARTING jobs and RUNNING jobs, and a different query for each value. I would like the corresponding query to be filled in when the user selects the Status. These are the queries:

Starting Jobs query:

index=infra_apps sourcetype=XXXX EventCode=40245 Status=Running AppID=$appid$ Machine=$host$ Job=$job$
| dedup _raw
| lookup datalakenodeslist.csv host as Machine OUTPUT cluster
| search cluster=$clustername$
| timechart count span=2m

Running Jobs query:

index=infra_apps sourcetype=ca:atsys:edemon:txt EventCode=40245 AND (Status=STARTING OR Status=Running) AppID=$appid$ Machine=$host$ Job=$job$
| dedup _raw
| lookup datalakenodeslist.csv host as Machine OUTPUT cluster
| search cluster=$clustername$
| eval starting=if(Status="STARTING","1","0"), status=if(Status="STARTING","start","stop"), time=_time+status
| bin span=2m _time
| stats max(starting) as starting, earliest(time) as first, latest(time) as last by Job, _time
| xyseries _time Job starting first last
| makecontinuous span=2m _time
| streamstats window=2 global=f earliest(last*) as last*
| reverse
| streamstats window=2 global=f earliest(first*) as first*
| reverse
| foreach starting* [ eval < >=if(isnull('< >') AND like('first< >',"%start"),"0",if(isnull('< >') AND like('first< >',"%stop"),"1",if(isnull('< >') AND like('last< >',"%start"),"1",if(isnull('< >') AND like('last< >',"%stop"),"0",'< >'))))]
| fields - first*, last*
| filldown *
| reverse
| filldown *
| reverse
| addtotals fieldname=Starting
| fields _time, Starting

PS: the token I am using is Status and the token value is jobstatus. Can you please help @vnravikumar @woodcock @sideview
Hello, We'd like to create a dashboard for our vulnerability data. Our two main goals are:

1. Track the number of vulnerabilities over a selected time range (e.g. 30 days, 45 days, etc.)
2. Show the current "vulnerability state" of an asset or group of assets (the data from the last performed scan).

There are no surprises in the first part, but for the second one we can't find a way to show the data of the last scan only. Our scans run at different frequencies for different assets, so we can't just select a time period equal to the scan frequency. We tried to use the value of the last_discovered field and then use the max function to track the last scan, like below:

| streamstats max(last_discovered) as last_scan
| where last_discovered=last_scan

But it does not work, because a scan can take a few seconds, so the last_discovered values are not all identical. Do you have any ideas how we can grab the data of the last scan only? Thanks.
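One common workaround is to treat everything within a small tolerance of the latest timestamp as belonging to "the last scan" (in SPL terms: compute the max with eventstats, then keep events where last_discovered >= last_scan - tolerance). The selection logic, sketched in Python with a hypothetical 60-second tolerance and made-up epoch timestamps:

```python
def last_scan_events(timestamps, tolerance=60):
    """Keep timestamps within `tolerance` seconds of the newest one.

    This absorbs the few seconds a single scan takes, so all events of
    the last scan survive even though their timestamps differ slightly.
    """
    last = max(timestamps)
    return [t for t in timestamps if t >= last - tolerance]

scans = [1584000000, 1584000003, 1583900000]  # last scan spans a few seconds
print(last_scan_events(scans))  # [1584000000, 1584000003]
```

The tolerance should be set a bit above the longest scan duration but well below the shortest scan interval.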
The idea is to show the top 3 hourly CPU averages per day for the last 7 days. The query I am using:

index=os sourcetype=ps host="Host1"
| timechart span=1h avg(pctCPU) as Avg_pctCPU

Here, I want to first sort the results and then, using the limit command, keep only the top 3 results with the maximum value for each day. If I then run the search over the last 7 days, it should do the same thing per day and give me 21 results: top 3 results per day * 7 days = 21 results in total. Thanks in advance.
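In SPL this is usually done by binning _time to the day and ranking within each day (e.g. sort by day and value, then streamstats a per-day counter and keep rank <= 3). The intended selection itself is simple to sketch in Python with made-up hourly averages:

```python
from collections import defaultdict

def top3_per_day(hourly):
    """hourly: list of (day, avg_cpu) pairs, one per hour.

    Returns the 3 highest hourly averages for each day, so 7 days of
    data yields at most 21 values in total.
    """
    by_day = defaultdict(list)
    for day, avg in hourly:
        by_day[day].append(avg)
    return {day: sorted(vals, reverse=True)[:3] for day, vals in by_day.items()}

sample = [("2020-03-01", v) for v in (10, 80, 30, 95, 5)] + \
         [("2020-03-02", v) for v in (50, 60, 70, 20)]
print(top3_per_day(sample))
# {'2020-03-01': [95, 80, 30], '2020-03-02': [70, 60, 50]}
```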
I am running the below query:

| makeresults
| eval data="Brand1,File1,123;Brand1,File2,456;Brand2,File1,789;Brand2,File2,124;Brand3,File1,125;Brand3,File2,786"
| makemv data delim=";"
| rex field=data max_match=0 "(?<Brand>\w+\d+),(?<Files>\w+\d+),(?<Size>\d+)"
| fields - _time,data
| table Brand,Size,Files
| chart values(Size) over Files by Brand

And I want the result in the below format:

Files  Brand1  Brand2  Brand3
File1  123     789     125
File2  456     124     786

But the result comes out as attached in the picture. What's wrong with the query?
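The usual culprit here is that after makemv and rex max_match=0, Brand, Files, and Size are each one multivalue field on a single event, so the pairing between them is lost before chart runs; mvexpand data before the rex typically restores one row per tuple. The target pivot itself (Size values in File rows and Brand columns) can be sketched in Python with the same sample data:

```python
def pivot(rows):
    """rows: (brand, file, size) triples. Returns {file: {brand: size}}."""
    table = {}
    for brand, fname, size in rows:
        table.setdefault(fname, {})[brand] = size
    return table

rows = [("Brand1", "File1", 123), ("Brand1", "File2", 456),
        ("Brand2", "File1", 789), ("Brand2", "File2", 124),
        ("Brand3", "File1", 125), ("Brand3", "File2", 786)]
print(pivot(rows)["File1"])  # {'Brand1': 123, 'Brand2': 789, 'Brand3': 125}
```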
We noticed that one of the sourcetypes, "wms_oracle_sessions", is missing. When we run the following searches, no results are found:

index=main sourcetype=wms_oracle_sessions
sourcetype=wms_oracle_sessions

Because of this, the following query displays no events ("No results found"):

index=main sourcetype=wms_oracle_sessions
| bucket span=5m _time
| stats count AS sessions by _time,warehouse,machine,program
| search warehouse=wk
| stats sum(sessions) AS psessions by _time,program
| timechart avg(psessions) by program

How can we proceed to get this working? Can we recreate the sourcetype? If we recreate the sourcetype, will the data be displayed?
Hi, The following query displays user logon events for the last 10 days. We need user logon events for the last 12 months. How can this be achieved?

index=main sourcetype=WinEventLog (EventCode=4624 OR EventCode=4634) user=pratapa.ln
| eval day=strftime(_time,"%d/%m/%Y")
| stats earliest(_time) AS earliest latest(_time) AS latest by user host day
| eval earliest=strftime(earliest,"%d/%m/%Y %H.%M.%S"), latest=strftime(latest,"%d/%m/%Y %H.%M.%S")