All Topics


Hello, we are seeing the error below after our Linux upgrade. Could someone please help us fix this issue?

Unable to initialize modular input "server" defined in the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 127).

The last time I saw this issue, I set the JAVA_HOME path in the splunk-launch.conf file and that fixed it, but now we are seeing the error again. Please help me out. Thanks
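Exit code 127 usually means "command not found", which fits a Java binary whose path changed during the OS upgrade. A sketch of the splunk-launch.conf entry that resolved this before (the JDK path shown is an assumption; substitute your actual Java location):

```
# $SPLUNK_HOME/etc/splunk-launch.conf
# Point DB Connect's JVM at the current JDK install (hypothetical path)
JAVA_HOME=/usr/lib/jvm/java-11-openjdk
```

If the path keeps reverting after upgrades, verifying it with `$JAVA_HOME/bin/java -version` as the splunk user is a quick sanity check.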
Hello Splunk Community, I have managed to use REST to list some columns from my CSV files. However, not all the columns are returned. I am using the search below:

| rest /servicesNS/-/-/data/lookup-table-files search="*_Weather.Lookups.csv"

The issue is that I am not getting all the fields associated with *_Weather.Lookups.csv. Does anybody out there know how I can get all the fields, or a specific field, from the lookup? Thanks in advance.
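Note that the lookup-table-files REST endpoint returns metadata about the lookup files (name, path, owner), not the rows inside them. To read a lookup's actual contents, a sketch with inputlookup (the file name Site1_Weather.Lookups.csv is a hypothetical stand-in for one of the matching files):

```
| inputlookup Site1_Weather.Lookups.csv
| table *
```

inputlookup takes an exact lookup file or definition name, so each matching file would need its own invocation (or an `append` of several).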
Hello, I need an effort estimate in hours for implementing event correlation for a general application, considering data from 5 to 6 sources such as AppDynamics, OEM, Splunk itself, and others. Are there any prerequisites to be considered for event correlation? What are the main things to take care of before we start? Thanks
hi, I use a basic base search like this:

<search id="test">
  <query>index=toto sourcetype=tutu | fields sam web_hits</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>
<search base="test">
  <query>| stats sum(web_hits)</query>
</search>

Splunk says that if you don't use a transforming command like stats, chart, or timechart, you can lose events when there are more than 500,000:

"Event retention: If the base search is a non-transforming search, the Splunk platform retains only the first 500,000 events that it returns. A post-process search does not process events in excess of this 500,000 event limit, silently ignoring them. This can generate incomplete data for the post-process search. This search result retention limit matches the max_count setting in limits.conf. The setting defaults to 500,000."

Does this mean that in my example I am sure not to lose events, because I use stats, which is a transforming command? I am confused, because I also use a base search with timechart, which is also a transforming command, but my timechart is incomplete when there are more than 500,000 events:

<search id="test">
  <query>index=tutu sourcetype="toto" | fields ica_latency_last_recorded ica_latency_session_avg idle_sec</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>
<search base="test">
  <query>| search idle_sec &lt; 300 | timechart span=1d avg(ica_latency_session_avg) as "Latence moyenne de la session (ms)"</query>
</search>

Can somebody clarify, please?
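Per the quoted documentation, the 500,000-event retention is decided by the base search itself, not by the post-process: in both examples above the base search ends with `fields`, which is non-transforming, so having stats or timechart in the post-process does not lift the limit. A sketch (reusing the field names from the second example) that moves the filter and the transforming command into the base search, leaving only presentation in the post-process:

```
<search id="test">
  <query>index=tutu sourcetype="toto" idle_sec&lt;300
| timechart span=1d avg(ica_latency_session_avg) as "Latence moyenne de la session (ms)"</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>
<search base="test">
  <query>| table _time "Latence moyenne de la session (ms)"</query>
</search>
```

The trade-off is that post-process searches can then only work with the aggregated rows, not the raw events.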
Hi, I have the search below, which works out the successes, failures, success_rate, failure_rate, and total. However, I would like to add a field that works out the number of minutes the failure rate is above a certain threshold, for example a 20% failure rate, but I am unsure how to do that:

index="main" source="C:\\inetpub\\logs\\LogFiles\\*"
| eval Time=(time_taken/1000)
| eval status=case(Time>20,"TimeOut", (sc_status!=200),"HTTP_Error", true(),"Success")
| stats sum(Time) as sum_sec, max(Time) as max_sec, count by status, sc_status, host, _time
| chart sum(count) by host, status
| addcoltotals labelfield=host label="(TOTAL)"
| addtotals fieldname=total
| eval successes=(total-(TimeOut+HTTP_Error))
| eval failures=(TimeOut+HTTP_Error)
| eval success_rate=round((successes/total)*100,2)
| eval failure_rate=round((failures/total)*100,2)
| table successes failures success_rate failure_rate total

Any help would be greatly appreciated. Thanks, Joe
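One way to count minutes over a threshold is to compute a per-minute failure rate first and then count the minutes that exceed it. A sketch along those lines, reusing the status logic from the question (the 20% threshold and 1-minute bin are assumptions to adjust):

```
index="main" source="C:\\inetpub\\logs\\LogFiles\\*"
| eval Time=(time_taken/1000)
| eval status=case(Time>20,"TimeOut", sc_status!=200,"HTTP_Error", true(),"Success")
| bin _time span=1m
| stats count(eval(status!="Success")) as failures, count as total by _time
| eval failure_rate=round((failures/total)*100,2)
| stats count(eval(failure_rate>20)) as minutes_above_20pct
```

`count(eval(...))` counts only the rows where the expression is true, which is what makes both the per-minute failure count and the final minute count work.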
Is there a way to find which forwarder a device's event logs came from? I have hundreds of devices sending WEC logs through WEC servers, and I could really do with an easy method to pinpoint where they came from at search time. Something like:

index=wec_index | ctable host, WECSvr
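By default Splunk does not stamp each event with the forwarder it arrived through. One common workaround (a sketch, not tested against your setup; the stanza and the `wec_server`/`wec01` names are hypothetical) is to have each WEC server's forwarder add an indexed field via `_meta` in inputs.conf, then tabulate it at search time:

```
# inputs.conf on each WEC server's universal forwarder
[WinEventLog://ForwardedEvents]
_meta = wec_server::wec01

# then, at search time, something close to the ctable idea:
# index=wec_index | stats values(wec_server) as WECSvr by host
```

Depending on version you may also need a matching fields.conf entry on the search head for the indexed field to behave well in searches.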
Hi, I have a Splunk query that outputs two fields (using table): "JOB_NAME" and "JOB_ID". For example, the output values are job_name 'abcd' and job_id '456'. The final output I would like to get is "abcd-456". How can I update the Splunk query to merge the two outputs into one? Thanks.
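eval's `.` operator concatenates strings, so a sketch (assuming the fields are named JOB_NAME and JOB_ID as in the question) would be:

```
... your existing search ...
| eval job=JOB_NAME."-".JOB_ID
| table job
```

With JOB_NAME=abcd and JOB_ID=456 this yields "abcd-456"; numeric fields are coerced to strings by the concatenation.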
Hi all, can someone help build a search to check for Total_login_Failures > 10 (per 24h) OR Number of Failures per user > 5? Both conditions need to be in the same search, and an alert should fire when either one is met. My search so far:

(index=index1 "Failed password") earliest=-1d
| eventstats count as Per_User_failures by user
| stats latest(_time) as _time, values(host), values(dest_ip), values(src_ip), dc(src_ip) as srcIpCount, values(user), dc(user) as userCount, count as Total_failures by src_ip dest
| rename values(*) as *
| where Total_failures>=10 AND Per_user_Failures>5
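Two things stand out in the attempt above: the final `where` uses AND rather than OR, and `Per_user_Failures` doesn't match the `Per_User_failures` created by eventstats (field names are case-sensitive), so it is always null and the condition never passes. Also, eventstats fields must be carried through the subsequent stats explicitly. A sketch of a corrected shape (field names from the question; the `by dest` grouping is an assumption):

```
index=index1 "Failed password" earliest=-1d
| eventstats count as Per_User_Failures by user
| stats latest(_time) as _time, values(host) as host, values(src_ip) as src_ip,
        dc(user) as userCount, max(Per_User_Failures) as Max_Per_User_Failures,
        count as Total_failures by dest
| where Total_failures>10 OR Max_Per_User_Failures>5
```

With the OR, the alert's "number of results > 0" trigger fires when either threshold is breached.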
I was wondering what, for example, the following means: "24 physical cores or 48 vcores". Does that mean that for a virtual environment I need double the number of physical cores as vcores (physical cores × 2 = vcores), or is there some relation from which I can deduce the number of physical cores needed for a virtual environment? Also, if a virtual environment were preferred partly because it reduces the amount of hardware needed (physical racks), and a virtual host then needed to run two servers with 24 vcores each as a recommendation, would the physical server need 48 physical cores to provide those vcores for both machines? Thanks a lot for clearing this up. I didn't find clear information in the Splunk docs or in the community so far.
Good morning, I am trying to create an alert indicating that data has stopped flowing to a specific index and host for more than 24 hours. Once created, the alert should trigger continuously, but only alert again after a new occurrence of data is received. My current settings are as follows:

Alert: Safelnk DOM East - No Data > 24 hours
Search:

| metadata type=hosts index="index Hidden"
| where host="Hostname_Hidden"
| eval age=abs((recentTime-now()))
| where age>86400
| table host recentTime age
| convert ctime(recentTime)

Scheduled: run every day at 10:00
Expires: 999 hour(s)
Trigger condition: Number of Results is greater than 0
Trigger: Once
Throttle: suppress triggering for 12 hour(s)
Trigger action: send email

Data sent when triggered:

host: "Hostname_Hidden"
recentTime: 11/08/2021
age: 386
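One thing to be aware of is that metadata's recentTime comes from bucket metadata and can lag the actual latest event. A tstats sketch (same hidden index/host placeholders as above) that reads the real latest event time instead:

```
| tstats latest(_time) as recentTime where index="index Hidden" host="Hostname_Hidden" by host
| eval age=now()-recentTime
| where age>86400
| convert ctime(recentTime)
```

For the re-alerting behavior, setting "Suppress results containing field value" to `host` in the throttle options makes the suppression per host rather than global.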
Hello guys, I will first mention that I'm pretty new to Splunk. I noticed that Splunk has stopped indexing logs from my Kali VM and my desktop host, whose logs go to index=main. Splunk's own logs and the other indexes are being indexed properly, but the "main" index has received nothing since two days ago (08/11/21). I tried searching for errors in splunkd, but I'm not proficient enough, so I hope somebody here can help me understand what the problem is. Thank you!
I need to extract the image name from a field, but I'm not getting it using rex. Can you help me identify what the error is? When I test the regex on regex101 it works.

index=teste | rex field=_raw "kubernetes_container_image: (?<container>.*)"

Sample events:

app: teste-app cluster_account: teste-prod kubernetes_container_image: rw-tested-001
app: teste-app2 cluster_account: teste-homolog kubernetes_container_image: 1232ds-teste--002
app: teste-app3 cluster_account: teste-prod kubernetes_container_image: rwteste-003
app: teste-app4 cluster_account: teste-homolog kubernetes_container_image: teste-001
app: teste-app5 cluster_account: teste-prod kubernetes_container_image: teste-001
app: teste-app6 cluster_account: teste-homolog kubernetes_container_image: teste-001
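A common cause for a rex like this silently failing is that the raw event has different or extra whitespace after the colon than the literal single space in the pattern. A more tolerant sketch that also captures only the image name rather than everything to end of line:

```
index=teste
| rex field=_raw "kubernetes_container_image:\s*(?<container>\S+)"
```

`\s*` accepts any spacing (including none or tabs) and `\S+` stops at the first whitespace, so trailing fields on the same line are not swallowed into `container`.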
We are on Splunk Enterprise v8.2.1, and the dashboard Export function is still broken when using savedsearch. What is the timeline for getting this fixed?
Hello, we encounter the message below on a Splunk search head. It causes the splunkd service to restart, and delays on the Splunk Web login page and in dashboard display. Have you encountered this type of problem? If so, what is it (the message is not clear)? Splunk Enterprise, Search Head 8.x.x. Thank you.

[build 545206cc9f70] 2021-08-27 11:21:24 Received fatal signal 11 (Segmentation fault). Cause: No memory mapped at address [0x0000000000000008]. Crashing thread: BundleReplicatorThread Registers: RIP: [0x00007F5BF348CC24] ? (libjemalloc.so.2 + 0x11C24)
Sep 21 10:13:16 prpgv-splksh01c kernel: [4960554.728167] splunkd[26359]: segfault at 8 ip 00007f2b51845c24 sp 00007f2b279fc868 error 6 in libjemalloc.so.2[7f2b51834000+49000]
Hi, I am starting to work with dashboards in Splunk Dashboard Studio (Splunk Cloud). I need to increase the font size of text inside a table. Can anyone please help with this?

"ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": {
  "type": "ds.search",
  "options": {
    "query": "My Query",
    "queryParameters": {
      "earliest": "-15m",
      "latest": "now"
    }
  }
},
Hi Team, please give some suggestions on how to monitor Citrix XenServer for CPU and memory.
I'm working with JSON data structured as below:

data: {
  application: { ... }
  completedAt: 1636133794444
  environments: [
    { id: XNu1-l8oROOOSM5gpoSR0g }
    { id: _LY0B7VpRq64tHXq7Uy55A }
    { id: 7KbvgSBMSUSUyAn2hMXSQA }
    { id: dJ7EuItjSG2M47-zvIvimQ }
  ]
}

Now when I use a case function for this like the one below:

| eval env = case('data.environments{}.id'=="7KbvgSBMSUSUyAn2hMXSQA", "prd-au", 'data.environments{}.id'=="_LY0B7VpRq64tHXq7Uy55A", "prd-gb")

it only ever brings back one result, and that's whichever is placed first in the case function: the above returns prd-au, and if I swap the values around it returns prd-gb. I presume this has something to do with how Splunk handles the JSON data, but I'm unsure how to resolve it. Any ideas?
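`data.environments{}.id` is a multivalue field, and eval's `==` does not test each value individually, which is why only the first branch of the case ever fires. A sketch using mvfind, which returns the index of the first value matching a pattern (or null if none does), so each branch tests whether its ID is present anywhere in the multivalue field:

```
| eval env=case(
    isnotnull(mvfind('data.environments{}.id', "7KbvgSBMSUSUyAn2hMXSQA")), "prd-au",
    isnotnull(mvfind('data.environments{}.id', "_LY0B7VpRq64tHXq7Uy55A")), "prd-gb")
```

Note mvfind takes a regex, so IDs containing regex metacharacters would need escaping; the IDs above are safe as-is.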
I have a JavaScript that I will be invoking from a dashboard to validate a field of user input, such that the field shouldn't contain any double quotes or any padded spaces at the beginning or end of the string. I need help with the regex to match the above condition. The script looks like this:

<form script="field_validation.js">
  <label>Url Validation</label>
  <fieldset submitButton="false">
    <input type="text" token="tkn_fld" id="tkn_fld_id">
      <label>URL</label>
    </input>
  </fieldset>
</form>

field_validation.js:

require([
  'underscore',
  'splunkjs/mvc',
  'jquery',
  "splunkjs/mvc/simplexml/ready!"
], function(_, mvc, $) {
  var tkn_fld = splunkjs.mvc.Components.getInstance("tkn_fld_id");
  tkn_fld.on("change", function(e) {
    console.log(e);
    // e.preventDefault();
    if (!isUrlValid(e)) {
      alert("Enter Valid URL");
      return false;
    }
  });
  function isUrlValid(userInput) {
    console.log(userInput);
    var res = userInput.match( /* NEED HELP TO WRITE THE REGEX HERE */ );
    if (res == null) return false;
    else return true;
  }
});
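One regex that matches the stated condition (no double quotes anywhere, no leading or trailing whitespace) uses two negative lookaheads plus a negated character class. A sketch of how it would slot into isUrlValid; note it only enforces those two rules, not that the input is actually a well-formed URL:

```javascript
// Validation sketch:
//   ^(?!\s)    - must not start with whitespace
//   (?!.*\s$)  - must not end with whitespace
//   [^"]*$     - must contain no double quotes
function isUrlValid(userInput) {
  var res = userInput.match(/^(?!\s)(?!.*\s$)[^"]*$/);
  return res !== null;
}
```

If you also want to require an http/https scheme, a prefix like `^(?=https?:\/\/)` could be added in front of the same pattern.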
I created several dashboards in Splunk using the chart command to look at aggregations across multiple fields, and they were working. After my backend team updated Splunk plugins such as https://splunkbase.splunk.com/app/3117/ and https://splunkbase.splunk.com/app/3137/, none of the dashboards where I used the chart command work anymore. Please let me know how I can solve this. Thanks in advance.
We are collecting syslog and Windows event log information in Azure Log Analytics. We're also using the Splunk Add-on for Microsoft Cloud Services to transfer AD audit logs to Splunk via Event Hub. Does the add-on support importing syslog logs via Event Hub, or will they not be parsed properly? Are there any other best practices for transferring this type of data? IT doesn't want to install any additional agents.