All Topics

I am using Splunk Connect for Kubernetes on EKS, which seems to capture most logs right out of the box. What it doesn't capture are interactive commands run from within containers. For example, if I "exec" into a container and run commands, none of that seems to be logged. Has anyone configured that level of auditing before, or am I missing something in the default setup that should be capturing it?
I installed the app and the add-on, and I have data that can be searched and matches the settings in the app. I can run the settings preview and get data back, but when I open the app the page loads, the configuration pop-up fires, and then the page goes completely blank. The menu bar is still there and active; other pages load via the menu bar but show "waiting for data" or "no data found". Anyone else with this problem?
Hi, I want to change the charts in my Splunk dashboards. I do not want to use the preinstalled column and bar graphs; can anyone suggest other chart types that can be used instead and that work with the stats command?
Hello everyone, we just integrated Splunk with McAfee ePO via DB Connect. We're trying to get some information out of ePO, but the default queries only cover antivirus. Is there any query template that I can use to get information from ePO? Thanks
I get a recurring question from auditors. We have 100 machines (Machine1, Machine2, ..., Machine100), and an auditor asked me to search one-year-old data for 'machine34'. I searched with * host=machine34 and manually selected March 2019. If the data had been there I would have been fine, but unfortunately no data showed up, because machine34 was only built 2 months ago. It took 2 hours to figure that out. So my question: is it possible to see the build date, or the date a machine first contacted Splunk, using Splunk? I am using the search below to view all machines: | metadata type=hosts index=* | stats count by host I am looking for another field, namely the build date or the date the host first sent data to Splunk. Is that possible? Thanks in advance.
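For reference, the metadata command already returns firstTime, lastTime and recentTime fields per host, and firstTime is effectively the "first contact" date being asked about. The underlying computation (earliest event timestamp per host) can be sketched in Python; the host names and epochs below are made-up samples:

```python
from datetime import datetime, timezone

def first_seen(events):
    """Return the earliest event timestamp per host, formatted as a date.

    events: iterable of (host, epoch_seconds) pairs.
    """
    earliest = {}
    for host, epoch in events:
        # Keep the smallest epoch seen for each host ("first contact").
        earliest[host] = min(earliest.get(host, epoch), epoch)
    return {h: datetime.fromtimestamp(t, tz=timezone.utc).strftime("%Y-%m-%d")
            for h, t in earliest.items()}

# Hypothetical sample: machine34 first appears in March 2020, not March 2019.
events = [
    ("machine34", 1583020800),  # 2020-03-01 UTC
    ("machine34", 1585699200),  # 2020-04-01 UTC
    ("machine1", 1552608000),   # 2019-03-15 UTC
]
print(first_seen(events))
```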
Hi Experts, in this search I want to fetch results only from the last 30 days up to now. taken_date is a field that contains a date and time. max_col1 holds the highest (latest) date value. I want to set max_col1_30 to that latest value minus 30 days, so max_col1_30 holds a date value 30 days in the past. I want max_col1_30 to stay CONSTANT; in this case it should always be 1574016298.000000, so that "where col1 > max_col1_30" returns only events from the last 30 days up to now.

base search from JSON..
| eval col1=strptime(taken_date,"%b %d %Y %H:%M:%S")
| stats max(col1) as max_col1 by col1
| eval max_col1_30= max_col1-2629743
| where col1 > max_col1_30
| table col1 max_col1_30

Results:
col1 | max_col1_30
-------------------------------------
1576545224.000000 | 1573915481.000000
1576646041.000000 | 1574016298.000000

Thank you.
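Note that "stats max(col1) as max_col1 by col1" groups by col1 itself, so each row's max_col1 is just its own col1 value; getting one constant cutoff requires the max over all events (for example eventstats max(col1) as max_col1, or stats without the by clause). The constant 2629743 is roughly one mean month in seconds (about 30.44 days), not exactly 30 days. The cutoff arithmetic can be sketched in Python, using the epoch values from the results table above:

```python
SECONDS_30_DAYS = 30 * 86400  # exactly 30 days; 2629743 in the search is ~1 mean month

def last_30_days(epochs):
    """Keep only epochs within 30 days of the latest value.

    Mirrors `where col1 > max_col1 - <30 days>` with a single, constant
    cutoff derived from the global maximum.
    """
    cutoff = max(epochs) - SECONDS_30_DAYS
    return [e for e in epochs if e > cutoff]

epochs = [1576646041.0, 1576545224.0, 1573915481.0]
print(last_30_days(epochs))  # the oldest value falls outside the window
```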
I am new to Splunk dashboards and am trying to create a generic tooltip that applies to all panels. I have created two sample panels with IDs panel1 and panel2. Using jQuery I was able to implement a tooltip for one panel; I would like the code below to pick up any panel id dynamically:

require([
    "splunkjs/mvc",
    "splunkjs/mvc/tokenutils",
    "jquery",
    "splunkjs/mvc/searchmanager",
    "splunkjs/ready!",
    "splunkjs/mvc/simplexml/ready!"
], function(mvc, TokenUtils, $, SearchManager) {
    $(function() {
        $("#panel1").on("mouseover", function() {
            var tokens = mvc.Components.get("submitted");
            tokens.set("tokToolTipShow1", "true");
        });
    });
    $(function() {
        $("#panel1").on("mouseout", function() {
            var tokens = mvc.Components.get("submitted");
            tokens.unset("tokToolTipShow1");
        });
    });
});
I have a PDF/Word file on the Splunk server. I want the file to open in a new tab when a user clicks the link. I tried it with HTML, but it is not working: <a href="h:/helpfile/newdocument/document.pdf" target="_blank">help with data</a>
The title is a bit confusing, but I have data like the below:

date,assetname,assetIP
2020/05/05 10:00:00,esprbtrapmgr1,195.187.11.144
2020/05/05 10:00:00,nxc-webap2,10.186.36.196
2020/05/05 10:00:00,eytocesxc7p15,10.16.22.186
2020/05/05 10:00:00,eytocesxc7p15,10.16.22.18
2020/05/05 10:00:00,eytocesxc7p15,10.16.26.98
2020/05/05 10:00:00,aktocesxc16p08,10.16.26.21
2020/05/05 10:00:00,aktocesxc16p08,10.16.56.23

and I want a table like the following. Any suggestions?

assetname | assetIP | assetIP2 | assetIP3 ...
esprbtrapmgr1 | 195.187.11.144 | |
eytocesxc7p15 | 10.16.22.186 | 10.16.22.18 | 10.16.26.98
aktocesxc16p08 | 10.16.26.21 | 10.16.56.23 |
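In SPL this kind of reshaping is usually done with stats values(assetIP) by assetname and then mvindex to spread the multivalue result into numbered columns. The transformation itself can be sketched in Python (field names taken from the sample above):

```python
def spread_ips(rows):
    """Pivot (assetname, assetIP) rows into one row per asset with
    numbered IP columns, matching the assetIP/assetIP2/assetIP3 layout."""
    grouped = {}  # insertion-ordered in Python 3.7+
    for name, ip in rows:
        grouped.setdefault(name, []).append(ip)
    table = []
    for name, ips in grouped.items():
        row = {"assetname": name}
        for i, ip in enumerate(ips, start=1):
            # First column is plain "assetIP", the rest are numbered.
            row["assetIP" if i == 1 else f"assetIP{i}"] = ip
        table.append(row)
    return table

rows = [
    ("esprbtrapmgr1", "195.187.11.144"),
    ("eytocesxc7p15", "10.16.22.186"),
    ("eytocesxc7p15", "10.16.22.18"),
]
print(spread_ips(rows))
```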
I have a JSON event with an id that I want to anonymize. However, I still need to be able to perform stats/count/grouping and other analytics on this id later. In short, I want to hide this id from users, while Splunk can still use it internally. Is this possible? My event looks something like this: {"duration":0.33,"a":"login","i":"50050","d":"2055502349","c":"LIVE","@timestamp":"2020-05-22T01:59:59.601Z"} I want to anonymize the "d" id.
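One common approach is deterministic (salted) hashing: the same id always maps to the same token, so stats/count/grouping keep working while the raw value stays hidden. Splunk's eval sha256() function could apply the same idea at ingest time if your version supports INGEST_EVAL. A sketch of the idea in Python (the salt and the truncation length are arbitrary choices):

```python
import hashlib

SALT = b"replace-with-a-secret-salt"  # assumption: kept private so ids can't be brute-forced

def pseudonymize(value: str) -> str:
    """Deterministically hash an id: identical inputs yield identical
    tokens (grouping still works), but the original value is hidden."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

print(pseudonymize("2055502349") == pseudonymize("2055502349"))  # same id -> same token
print(pseudonymize("2055502349") != pseudonymize("2055502348"))  # different ids differ
```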
Good afternoon fellow Splunk enthusiasts, I need your help with data anonymization.

Situation: An application on a server with a UFW produces a log. Most of it is boring operational stuff, but certain records contain a field considered sensitive. The log records are needed by ordinary Ops admins (who need to see all records, but not the actual sensitive field value) and by privileged troubleshooters, who need to see the sensitive data too.

Architecture: Data is produced on a server with a UFW, will be stored on an indexer cluster, and there is one heavy forwarder (HFW) available in my deployment.

Limitations:
1. Due to limited bandwidth between the UFW and the Splunk servers, it is preferred not to increase the volume of data transferred from the UFW (bandwidth between the HFW and the indexers is fine).
2. Due to the time-constrained validity of the sensitive field, the delay introduced by a search->modify->index-again cycle every few minutes is not acceptable.
3. Indexing the sensitive records twice is OK. Indexing the whole log twice would be too expensive.

Proposed solution: The UFW will forward the log to the heavy forwarder, where it should be duplicated. One copy of the data should be anonymized and forwarded to index "operational", while the other should be filtered (only records with the sensitive field are kept) and then forwarded to index "sensitive".

Problem: I know how to route data, how to anonymize data, and how to filter data before routing, but I am not sure how to connect the dots in the described manner. Specifically, I don't know how to duplicate the data on the HFW and make sure each copy is treated differently. Can you help, or possibly propose a better solution?
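One mechanism that connects these dots on a heavy forwarder is CLONE_SOURCETYPE in transforms.conf: it emits a duplicate of each matching event under a new sourcetype, and the two copies can then be masked, filtered, and routed independently. The sketch below is illustrative only; every stanza name, field name, and index name is a placeholder, and the details (especially transform ordering and the nullQueue default-drop pattern) should be verified against the props.conf/transforms.conf specs for your version:

```ini
# props.conf (on the HFW)
[my_ufw_sourcetype]
# Clone first, then mask the original copy only.
TRANSFORMS-dup = clone_for_sensitive, mask_operational_copy

[my_ufw_sourcetype_sensitive]
# On the clone: drop everything, keep only sensitive records, route them.
TRANSFORMS-route = drop_all, keep_sensitive, route_to_sensitive_index

# transforms.conf
[clone_for_sensitive]
REGEX = .
CLONE_SOURCETYPE = my_ufw_sourcetype_sensitive

[mask_operational_copy]
REGEX = (sensitive_field=)\S+
FORMAT = $1#####
DEST_KEY = _raw

[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_sensitive]
REGEX = sensitive_field=
DEST_KEY = queue
FORMAT = indexQueue

[route_to_sensitive_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = sensitive
```

Role-based access to the two indexes then separates what Ops admins and privileged troubleshooters can see.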
Hi All - We have a problem where our SHC captain seems to stop responding. Looking at netstat and the splunkd logs, there are a bunch of CLOSE_WAIT connections that just persist in netstat. In addition, the logs show a bunch of errors such as "HttpListener ..... max thread limit for REST HTTP server is 5333." So the captain fails to respond to requests and then the cluster stops working altogether. I would have thought the other members would automatically elect a new captain (we have 5 hosts in total in the SHC). To remedy this I end up having to completely reboot the captain. That brings things back, but obviously we want this setup to be more resilient. I am hesitant to raise the thread limits in server.conf because it seems like the count will just keep growing. Ideally, when the failure occurs on the captain, the role would simply transfer to another host. Does anyone have any troubleshooting suggestions or remedies? Thanks!
I have installed Splunk Cloud Gateway and can successfully see all the dashboards from my Splunk Enterprise, but when I click one of my dashboards, it says "visualization could not be displayed". What's the problem? Thanks!
Hi, I have created an index with a size of 500 GB (maxTotalDataSizeMB) and also configured a frozen path where data will be stored once the 500 GB is reached. Now I want to know the following:
1. If I increase the size of that index to 1 TB, is there any risk of data getting deleted?
2. If, after changing to 1 TB, I then reduce the size to 800 GB, will the remaining 200 GB of data go to the frozen path? Is there any risk involved, or any precaution that needs to be taken to avoid data loss?
3. If I want a frozen bucket to be searchable again, I copy that frozen bucket to the thawed path. After data retention, will those buckets be moved to frozen again, leaving me with duplicate buckets? Should I therefore move the frozen bucket to the thawed path instead of copying it?
Thanks,
Hello all, I am trying to configure a service in ITSI with two KPIs ("A", "B"), both set to the highest importance (11). When I run the test inside the service configuration it works perfectly: if either of these KPIs reaches critical status, the global health score of the service goes to 0 (critical). But in practice, when the service is running with real data, the global health score only changes to critical when "A" goes critical; it does not happen with "B". It seems the system gives more importance to "A" than to "B". I would like to ask if someone is facing the same issue, whether this is a real limitation of ITSI (although it works well in testing mode when configuring the service), and whether someone has a solution. Thanks very much in advance for the help,
Hello, I want to find out which dashboards take a long time to load, so I would like a table showing the runtimes/search times for all dashboards opened by any user. It should look something like this:

time | dashboard | user | runtime (in seconds)
2020-05-22 10:02:00 | sample_dashboard | admin | 15
2020-05-22 10:01:00 | sample_dashboard | admin | 20
2020-05-22 10:00:00 | sample_dashboard2 | user | 5

I found two answers from 2016 and 2017 which do not work (the first returns nothing and the second lists searches instead of dashboards):
https://answers.splunk.com/answers/425215/how-can-i-measure-the-dashboard-load-time.html
https://answers.splunk.com/answers/488539/how-to-write-a-search-to-find-out-the-average-dash.html
Can anybody help?
Hi Splunkers, we need to schedule an alert for the 2nd Wednesday, Thursday, and Friday of every month at 11 AM. I tried the cron expression below, but I didn't get the expected results. Cron Expression: 0 11 8-14 * 3-4 Please provide the proper cron expression.
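Two things to check. First, 3-4 in the day-of-week field covers only Wednesday and Thursday (with Sunday=0), so including Friday needs 3-5. Second, in classic cron semantics, when both the day-of-month and day-of-week fields are restricted, many implementations fire when either field matches, so "0 11 8-14 * 3-5" may also run on every Wednesday-Friday of the month; a common workaround is to schedule on one field and guard the other condition inside the search itself. The calendar fact that makes the 8-14 range correct — the 2nd occurrence of any weekday always falls on days 8 through 14 — can be sketched in Python:

```python
from datetime import date, timedelta

def second_wed_thu_fri(year, month):
    """Dates of the 2nd Wednesday, Thursday, and Friday of a month.

    The nth occurrence of a weekday falls in days (n-1)*7+1 .. n*7,
    so the 2nd occurrence is always between the 8th and the 14th.
    """
    hits = []
    d = date(year, month, 8)
    while d.day <= 14:
        if d.weekday() in (2, 3, 4):  # Mon=0, so Wed=2, Thu=3, Fri=4
            hits.append(d)
        d += timedelta(days=1)
    return hits

# May 2020: 2nd Friday is the 8th, 2nd Wednesday the 13th, 2nd Thursday the 14th.
print(second_wed_thu_fri(2020, 5))
```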
According to the security scan report, our security team found that port 8089 has some security issues. How can I disable SSL 2.0 and SSL 3.0 and use TLS 1.1 or higher instead?
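For splunkd on port 8089, the accepted protocol versions are controlled by the sslVersions setting in the [sslConfig] stanza of server.conf. A sketch of the relevant stanza (verify the exact values against the server.conf spec for your Splunk version, and restart splunkd afterwards):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
# Accept only TLS 1.1 and newer; SSLv2/SSLv3 are excluded by omission
sslVersions = tls1.1, tls1.2
```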
I tried to compute the difference between 2 dates, but it is not working properly. Here is my query:

index=s_iss sourcetype=S_AD
| fillnull value=""
| eval Last_Date="2019-09-28 17:09:19.0"
| eval _time="2019-05-21 4:55:00.143"
| eval Last_Date=strftime(strptime(Last_Date,"%Y-%m-%d %H:%M:%S.%Q"),"%Y-%m-%d")
| eval _time = strptime(_time, "%Y-%m-%d")
| eval diff = ( _time - Last_Date)
| stats count by Name,Last_Date,_time,diff

I need the time difference between Last_Date and now(), displayed as a date. Can someone help me out?
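The eval chain appears to subtract a formatted string (Last_Date, turned back into "%Y-%m-%d" text by strftime) from a numeric epoch (_time, parsed by strptime), so diff cannot be computed. Keeping both values as epoch seconds until after the subtraction fixes this; the arithmetic can be sketched in Python, using the sample timestamps from the query (the format string is an assumption based on those values):

```python
from datetime import datetime

def days_between(earlier: str, later: str) -> int:
    """Whole days between two timestamp strings.

    Mirrors the intended SPL: parse both to epoch, subtract,
    then divide by 86400 seconds per day.
    """
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    a = datetime.strptime(earlier, fmt)
    b = datetime.strptime(later, fmt)
    return int((b - a).total_seconds() // 86400)

print(days_between("2019-05-21 4:55:00.143", "2019-09-28 17:09:19.0"))
```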
When I run this SPL, the transaction command gives the correct output:

index=* source=/var/log/secure* (TERM(sudo) AND (TERM(adduser) OR TERM(chown) OR TERM(userdel) OR TERM(chmod) OR TERM(usermod) OR TERM(useradd)) AND COMMAND!="*egrep*") OR (TERM(sshd) AND "Accepted password" AND TERM(from) AND TERM(port))
| regex _raw != ".*bin\/grep|.*bin\/man|.*bin\/which|.*bin\/less|.*bin\/more"
| rex field=_raw "(?<=sudo:)\s*(?P<Users>[[:alnum:]]\S*[[:alnum:]])\s*(?=\:).*(?<=COMMAND\=)(?P<command>.*)"
| rex field=_raw "(?<=for)\s*(?P<Users>[[:alnum:]]\S*[[:alnum:]])\s*(?=from).*(?<=from)\s*(?P<ip>[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+)"
| eval "Command/Events" = replace(command,"^(\/bin\/|\/sbin\/)","")
| eval Time = if(match(_raw,"(?<=sudo:)\s*[[:alnum:]]\S*[[:alnum:]]\s*(?=\:).*(?<=COMMAND\=)*"),strftime(_time, "%Y-%d-%m %H:%M:%S"),null())
| eval Date = strftime(_time, "%Y-%d-%m")
| eval "Report ID" = "ABLR-007"
| eval "Agency HF" = if(isnull(agencyhf),"",agencyhf)
| rename host as Hostname, index as Agency
| transaction Date Hostname Users Agency startswith="sshd" maxevents=-1 keepevicted=true
| regex _raw = ".*sshd\:\n.*sudo\:|.*sudo\:"

Result:

But when I then tabulate the data using the SPL below, the time is wrong:

index=* source=/var/log/secure* (TERM(sudo) AND (TERM(adduser) OR TERM(chown) OR TERM(userdel) OR TERM(chmod) OR TERM(usermod) OR TERM(useradd)) AND COMMAND!="*egrep*") OR (TERM(sshd) AND "Accepted password" AND TERM(from) AND TERM(port))
| regex _raw != ".*bin\/grep|.*bin\/man|.*bin\/which|.*bin\/less|.*bin\/more"
| rex field=_raw "(?<=sudo:)\s*(?P<Users>[[:alnum:]]\S*[[:alnum:]])\s*(?=\:).*(?<=COMMAND\=)(?P<command>.*)"
| rex field=_raw "(?<=for)\s*(?P<Users>[[:alnum:]]\S*[[:alnum:]])\s*(?=from).*(?<=from)\s*(?P<ip>[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+)"
| eval "Command/Events" = replace(command,"^(\/bin\/|\/sbin\/)","")
| eval Time = if(match(_raw,"(?<=sudo:)\s*[[:alnum:]]\S*[[:alnum:]]\s*(?=\:).*(?<=COMMAND\=)*"),strftime(_time, "%Y-%d-%m %H:%M:%S"),null())
| eval Date = strftime(_time, "%Y-%d-%m")
| eval "Report ID" = "ABLR-007"
| eval "Agency HF" = if(isnull(agencyhf),"",agencyhf)
| rename host as Hostname, index as Agency
| transaction Date Hostname Users Agency startswith="sshd" maxevents=-1 keepevicted=true
| regex _raw = ".*sshd\:\n.*sudo\:|.*sudo\:"
| fields "Report ID" Time Agency Command/Events Hostname Users ip "Agency HF"
| rename ip as "IP Address"
| eval multivalue_fields = mvzip(Time,'Command/Events')
| mvexpand multivalue_fields
| makemv multivalue_fields delim=","
| eval Time=mvindex(multivalue_fields , 0)
| eval "Command/Events"=mvindex(multivalue_fields , 1)
| table "Report ID" Time Agency Command/Events Hostname Users "IP Address" "Agency HF"

Result:

** I have also tried using stats; yes, I could combine the data with it. However, I can't use that because I would need to rely heavily on list(), the data sometimes exceeds 100 values, and the customer does not want me to touch limits.conf, so I switched to transaction instead.
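A likely culprit in the tabulation step is makemv delim=",": mvzip joins each Time/command pair with a comma by default, so any command value that itself contains a comma splits into extra pieces and shifts the Time column. mvzip accepts a third argument for a custom delimiter, which avoids the collision. The pairing logic, with a delimiter that cannot appear in the data, can be sketched in Python (field values below are illustrative):

```python
def zip_expand(times, commands, delim="|||"):
    """Pair the i-th time with the i-th command (the mvzip + mvexpand
    pattern). A delimiter that cannot occur in the data prevents
    mis-splits when a command itself contains a comma."""
    combined = [f"{t}{delim}{c}" for t, c in zip(times, commands)]
    # split(..., 1) splits only on the first delimiter, like mvindex 0/1.
    return [tuple(row.split(delim, 1)) for row in combined]

times = ["2020-22-05 10:00:01", "2020-22-05 10:00:05"]
commands = ["/usr/sbin/useradd bob", "/bin/chmod 640 a,b"]  # note the comma in the 2nd
print(zip_expand(times, commands))
```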