All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Has anyone sent logs from BMC AMI Defender to Splunk? I would like to know. Thanks!
I have been using the Microsoft Azure Add-on for Splunk to ingest Azure sign-in logs for over a year, and today I see the following error:

2021-11-08 16:31:12,318 ERROR pid=4614 tid=MainThread file=base_modinput.py:log_error:309 | Traceback (most recent call last):
File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/splunklib/binding.py", line 1262, in request
raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- b'{"messages":[{"type":"ERROR","text":"Unexpected error \\"<class \'splunktaucclib.rest_handler.error.RestError\'>\\" from python handler: \\"REST Error [400]: Bad Request -- HTTP 400 Bad Request -- int() argument must be a string, a bytes-like object or a number, not \'NoneType\'\\". See splunkd.log for more details."}]}'

Is anyone else experiencing this issue?
We have a relatively small set of devices, each of which emits in the vicinity of a million events daily. Each device has a unique ID (serial number) which is included in its events. What would be an efficient method of collecting a list of unique IDs?

index=abc | stats count by ID
index=abc | stats values(ID) as IDs | mvexpand IDs
index=abc | fields ID | dedup ID

Anything else?
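If ID is only extracted at search time, all three searches above must scan the raw events. One possibly faster sketch, under the assumption that ID is an indexed field (the index and field names are taken from the question):

```
| tstats count where index=abc by ID
```

Failing that, `stats count by ID` is generally the safest of the three, since it aggregates in a distributed fashion on the indexers, while `mvexpand` and `dedup` both carry more memory overhead at high cardinality.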
I am trying to find out if anyone has done this by now: ingesting SMF 110, SMF 100-102, SMF 120, SMF 115-116, SMF 92, SMF 70-79, SMF 80, SMF 30, SMF 118-119, and others into sp
My event returns the following:

1@test.com/test/2_0" xmlns:d4p1="http://www.w3.org/1999/xlink"> <eb:Description xml:lang="en">test document ref 000000000000.rtf

but I want to strip the following string and replace it with a colon and a space:

@test.com/test/2_0" xmlns:d4p1="http://www.w3.org/1999/xlink"> <eb:Description xml:lang="en">

Ideal result: 1: test document ref 000000000000.rtf

Is there a way of doing this?
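A sed-style replacement with rex can do this kind of strip-and-replace at search time. A minimal sketch, assuming the string is constant across events (the base search is a placeholder, and the exact escaping of the quotes may need adjustment):

```
index=abc
| rex mode=sed field=_raw "s|@test\.com/test/2_0\" xmlns:d4p1=\"http://www\.w3\.org/1999/xlink\"> <eb:Description xml:lang=\"en\">|: |"
```

Using `|` as the sed delimiter avoids having to escape the forward slashes in the URL. If the string varies, a capturing regex with `rex` followed by `eval` concatenation would be the alternative.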
Hi,

I have the below search, which works out the successes, failures, success_rate, failure_rate and total. However, I would like to add a field that works out the number of minutes the failure rate is above a certain threshold (for example, a 20% failure rate), but I am unsure how to do that:

index="main" source="C:\\inetpub\\logs\\LogFiles\\*"
| eval Time=(time_taken/1000)
| eval status=case(Time>20,"TimeOut", (sc_status!=200),"HTTP_Error", true(),"Success")
| stats sum(Time) as sum_sec, max(Time) as max_sec, count by status, sc_status, host, _time
| chart sum(count) by host, status
| addcoltotals labelfield=host label="(TOTAL)"
| addtotals fieldname=total
| eval successes=(total-(TimeOut+HTTP_Error))
| eval failures=(TimeOut+HTTP_Error)
| eval success_rate=round((successes/total)*100,2)
| eval failure_rate=round((failures/total)*100,2)
| table successes failures success_rate failure_rate total

Any help would be greatly appreciated.

Thanks,

Joe
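One possible shape for the "minutes above threshold" part (a sketch, not the author's method): compute the failure rate per one-minute bucket with timechart, then count the buckets above the threshold. The index, source, and field names are taken from the question; the 20% threshold is the example given:

```
index="main" source="C:\\inetpub\\logs\\LogFiles\\*"
| eval Time=(time_taken/1000)
| eval status=case(Time>20,"TimeOut", (sc_status!=200),"HTTP_Error", true(),"Success")
| timechart span=1m count(eval(status!="Success")) as failures, count as total
| eval failure_rate=round((failures/total)*100,2)
| stats count(eval(failure_rate>20)) as minutes_above_threshold
```

Each row of the timechart is one minute, so counting rows whose failure_rate exceeds the threshold yields minutes directly.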
Hello! I have a lookup table that looks like the following:

host     timestamp
host1    10:33
host2    4:24

What I would like to do is "iterate" through the lookup table, using the host field for the host and the timestamp field for the search. Does anyone have any opinions/thoughts?
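One way to iterate over lookup rows is the map command, which runs a subsearch once per input row with the row's fields available as tokens. A minimal sketch, assuming the lookup file is named hosts.csv (hypothetical) and that the index and the use of the timestamp value as a search term fit the actual data:

```
| inputlookup hosts.csv
| map maxsearches=100 search="search index=main host=$host$ $timestamp$"
```

Note that map runs one search per row, so it scales poorly; if the goal is just to filter a search by the hosts in the lookup, a plain `[| inputlookup hosts.csv | fields host]` subsearch is usually cheaper.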
What happened to the ES Sandbox? I can no longer find it to sign up for it.
Hi, I am using the Splunk API (8.1.1) and a script to update an existing dashboard. When I try to send the characters ; & > < I get a 400 Bad Request error saying "...is not supported by this handler".

curl --location --request POST 'https://localhost:8089/servicesNS/nobody/myapp/data/ui/views/mydashboard' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' \
--header 'Content-Type: text/html' \
--data-raw 'eai:data=<dashboard><label>My Report</label><description>My characters are: ; & > < and more...</description></dashboard>'

How can I send these characters so I can display them on my dashboard? Thanks in advance!
Hello, we are seeing the below error after our Linux upgrade. Could someone please help us fix this issue?

Unable to initialize modular input "server" defined in the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 127).

When I saw this issue last time, I set the JAVA_HOME path in the splunk-launch.conf file. That worked then, but we are seeing this error again. Please help me out.

Thanks
Hello Splunk Community,

I have managed to use REST to pull some of the columns from my CSV files. However, not all the columns come through. I am using the below search:

| rest /servicesNS/-/-/data/lookup-table-files search="*_Weather.Lookups.csv"

The issue is that I am not getting all the fields associated with *_Weather.Lookups.csv. Does anybody out there know how I can get all the fields, or a specific field, from the lookup?

Thanks in advance.
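One thing worth noting: the rest endpoint above returns metadata about the lookup files (name, owner, path), not their row contents. To read the rows and fields themselves, inputlookup may be the better fit. A sketch, where My_Weather.Lookups.csv is a hypothetical file name matching the pattern in the question:

```
| inputlookup My_Weather.Lookups.csv
| table *
```

A `| fields some_field` after the inputlookup would narrow the output to a specific column.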
Hello, I need an effort estimate (in hours) for implementing event correlation for a general application, considering data from five to six sources such as AppDynamics, OEM, Splunk itself, and others. Also, are there any prerequisites to be considered for event correlation? What are the main things to take care of before we start event correlation?

Thanks
Hi, I use a basic base search like this:

<search id="test"> <query>index=toto sourcetype=tutu | fields sam web_hits</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <search base="test"> <query> | stats sum(web_hits)</query> </search>

The Splunk documentation says that if you don't use a transforming command like stats, chart, or timechart, you can lose events when there are more than 500,000 of them:

"Event retention: If the base search is a non-transforming search, the Splunk platform retains only the first 500,000 events that it returns. A post-process search does not process events in excess of this 500,000 event limit, silently ignoring them. This can generate incomplete data for the post-process search. This search result retention limit matches the max_count setting in limits.conf. The setting defaults to 500,000."

Does this mean that in my example I am sure not to lose events, because I use stats, which is a transforming command? I am confused, because I also use a base search with timechart, which is also a transforming command, but my timechart is incomplete when there are more than 500,000 events:

<search id="test"> <query>index=tutu sourcetype="toto" | fields ica_latency_last_recorded ica_latency_session_avg idle_sec</query> <earliest>-7d@h</earliest> <latest>now</latest> </search> <search base="test"> <query> | search idle_sec &lt; 300 | timechart span=1d avg(ica_latency_session_avg) as "Latence moyenne de la session (ms)"</query> </search>

Can somebody clarify, please?
Is there a way to find which forwarder a device's event logs came from? I have hundreds of devices sending WEC logs through WEC servers, and I could really do with an easy method to pinpoint where they came from at search time. Something like:

index=wec_index | ctable hosts, WECSvr
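The forwarder's identity is not stored in a default search-time field, so one common technique (a sketch, not the only option) is to stamp it onto events at input time with _meta in inputs.conf on each WEC server's forwarder. The stanza, field name fwd_host, and server name below are illustrative assumptions:

```
# inputs.conf on the WEC server's universal forwarder
[WinEventLog://ForwardedEvents]
_meta = fwd_host::wec-server-01
```

After a forwarder restart, something like `index=wec_index | stats count by host, fwd_host` would produce the device-to-WEC-server table the question describes.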
Hi,

I have a Splunk query which outputs two fields (using table): JOB_NAME and JOB_ID. For example, the output values are JOB_NAME 'abcd' and JOB_ID '456'. The final output I would like to get is "abcd-456". How can I update the Splunk query to merge the two outputs into one?

Thanks.
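String concatenation with eval's `.` operator handles this. A sketch, where the input field names come from the question and JOB is a hypothetical name for the merged field:

```
... | eval JOB=JOB_NAME."-".JOB_ID
    | table JOB
```

With JOB_NAME "abcd" and JOB_ID "456", JOB would contain "abcd-456".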
Hi all, can someone help me build a search that checks for Total_login_Failures > 10 (per 24h) OR number of failures per user > 5? Both conditions need to be in the same search, and an alert will fire when either one is met. My search so far:

( index=index1 "Failed password" ) earliest=-1d
| eventstats count as Per_User_failures by user
| stats latest(_time) as _time, values(host), values(dest_ip), values(src_ip), dc(src_ip) as srcIpCount, values(user), dc(user) as userCount, count as Total_failures by src_ip dest
| rename values(*) as *
| where Total_failures>=10 AND Per_User_failures>5
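Two things in the attempt above likely need attention (a sketch under assumptions, not a definitive fix): a field created by eventstats does not survive a later stats unless it is carried through an aggregation, and the stated alert condition is an OR, not an AND. One possible shape, keeping the index and field names from the question:

```
index=index1 "Failed password" earliest=-1d
| eventstats count as Per_User_failures by user
| stats latest(_time) as _time, values(host) as host, values(dest_ip) as dest_ip,
        dc(user) as userCount, max(Per_User_failures) as Max_User_failures,
        count as Total_failures by src_ip
| where Total_failures>10 OR Max_User_failures>5
```

Carrying the per-user count through `max()` lets the post-stats where clause test both thresholds in one search.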
I was wondering what, for example, the following means: "24 physical cores or 48 vcores". Does that mean that for a virtual environment I need double the physical cores as vcores (2 vcores = 1 physical core), or is there some other relation from which I can deduce the number of physical cores needed for a virtual environment?

Also, suppose a virtual environment were preferred partly because it would reduce the amount of hardware needed (physical racks), and a virtual host then needed to run two servers with 24 vcores each per the recommendation. Would that mean the physical server would need 48 physical cores to provide those vcores for both machines?

Thanks a lot for clearing this up. I haven't found clear information in the Splunk docs or in the community so far.
Good morning, I am trying to create an alert to indicate that data has stopped flowing to a specific index and host for more than 24 hours. Once triggered, the alert keeps firing continuously, but I only want to be alerted again after a new occurrence of data has been received. My current settings for the alert are as follows:

-------------------------------
Settings: Alert Safelnk DOM East - No Data > 24 hours
Description:
Search:

| metadata type=hosts index="index Hidden"
| where host="Hostname_Hidden"
| eval age=abs((recentTime-now()))
| where age>86400
| table host recentTime age
| convert ctime(recentTime)

Scheduled: run every day
At: 10:00
Expires: 999 hour(s)
Trigger Conditions:
Trigger alert when Number of Results is Greater than: 0
Trigger: Once / For each result
Throttle:
Suppress results containing field value:
Suppress triggering for: 12 hour(s)
Trigger Actions: When triggered: Email is sent
-------------------------------

Data that is sent when triggered:

host                  recentTime    age
"Hostname_Hidden"     11/08/2021    386
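A common alternative sketch for no-data alerts, which avoids the metadata command's caching behavior (the index and host values below mirror the hidden placeholders in the question and are assumptions):

```
| tstats latest(_time) as recentTime where index="index Hidden" host="Hostname_Hidden" by host
| eval age=now()-recentTime
| where age>86400
| convert ctime(recentTime)
```

Combined with throttling on the host field ("Suppress results containing field value: host"), this would re-alert per host only after the suppression window expires, rather than on every scheduled run.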