All Topics

Hi all, is there a way to send the dashboard results as a CSV file rather than a PDF? Regards
I have two partitions on my CentOS machine. The first is 20 GB and mounted on /, and the second is 300 GB and mounted on /opt. Splunk is installed in /opt/splunk and there are 300 GB free on that partition, but I am getting a disk space warning on the / partition. Please help!
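A note on why this can happen: Splunk's disk-usage check applies to whichever partition holds the path being checked, and while the indexes live under /opt/splunk, other paths Splunk touches (for example /tmp, or the OS itself) may sit on /. A quick stdlib sketch to confirm which filesystem is actually low; the 5000 MB figure is my understanding of the default minFreeSpace in server.conf, so verify it against your version's docs:

```python
import os
import shutil

# Splunk warns when free space on a monitored partition drops below
# minFreeSpace (server.conf; 5000 MB is assumed here as the default).
MIN_FREE_MB = 5000

for path in ("/", "/opt"):
    if not os.path.isdir(path):
        continue
    usage = shutil.disk_usage(path)
    free_mb = usage.free // 2**20
    flag = "LOW" if free_mb < MIN_FREE_MB else "ok"
    print(f"{path}: {free_mb} MB free ({flag})")
```

If / shows up low here, the fix is freeing space on / (or moving whatever Splunk uses there), not on /opt.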
Hello everyone. When I run a health check in the Splunk GUI, it shows an error: "One or more source types has been found to present events in the future." All the sources give the correct timestamp in UTC +0:00, but when I checked the devices configured with the flagged source types, they are in another timezone (UTC +08:00), and the logs we receive carry those future timestamps. How can I overcome this future-timestamp problem? The Splunk indexer's timezone is UTC +0:00. Please refer to the screenshot. Thanks in advance.
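For context, the skew this produces is easy to reproduce outside Splunk: a device in UTC+8 that logs its local wall-clock time without an offset marker will look up to eight hours in the future to an indexer that parses the string as UTC. The usual fix is a per-sourcetype TZ setting in props.conf (e.g. TZ = Asia/Shanghai) so the timestamp is interpreted in the device's zone. A small illustration (the timestamp is made up):

```python
from datetime import datetime, timedelta, timezone

# A device in UTC+8 logs its local wall-clock time with no offset marker.
raw = "2022-05-30 20:00:00"
fmt = "%Y-%m-%d %H:%M:%S"

# The indexer (UTC +0:00) parses the string in its own zone...
parsed_as_utc = datetime.strptime(raw, fmt).replace(tzinfo=timezone.utc)

# ...but the moment the device actually meant was 20:00 at UTC+8:
true_instant = datetime.strptime(raw, fmt).replace(
    tzinfo=timezone(timedelta(hours=8))
)

skew = parsed_as_utc - true_instant.astimezone(timezone.utc)
print(skew)  # 8:00:00 -- the event appears eight hours in the future
```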
How does Splunk calculate Time to Triage, and what data does it use? For example, the time an event occurred versus the time the event was modified or moved to Pending, etc.?
Hi, I want to study Splunk alert investigations. I mean I want to see the alerts and then investigate them, i.e. the process of alert investigation. But I couldn't find any resources. Can you help?
Hi all, I'm trying to find credit card details in the logs with a single regex expression. But I am also getting other data, like timestamps (which have more than 12 digits) and some random numbers. I'm a bit exhausted with this. Is there any possible solution to find the credit card numbers directly, one that will not match random numbers or timestamps? Help me with the query if possible. Thanks in advance.
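For what it's worth, a regex alone cannot separate a card number from any other 13-to-16-digit run; the usual trick is to pair the regex with a Luhn checksum, which timestamps and random digit strings almost always fail. A sketch of the idea (the sample log line is invented; in Splunk you would apply the same checksum after a rex extraction, e.g. in an external lookup or a custom search command):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally broken up by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str) -> list[str]:
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

log = "ts=1653916800000 card=4111 1111 1111 1111 id=1234567890123"
print(find_card_numbers(log))  # ['4111111111111111']
```

The 13-digit timestamp and the random ID both match the regex but fail the checksum, so only the genuine card number survives.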
I have stores and I want to check whether each store is up or down, using its processes.

Processes.csv lookup:

process | Services | DeviceType
ax | Amazonx | controller
by | buy | register

I wrote a query, but it is not showing the up/down status:

| mstats latest_time(value) as _time where (host="*" OR host="t*") index=a_store_metrics AND metric_name="process.time" by host process
| search process IN ("ax","by")
| eval host=lower(host)
| rex field=host "(?<Device>[^\.]+)"
| rex field=Device "(?<store>\w{7})"
| search [| inputlookup store_device where store="a01" | fields Device | format]
| lookup store_device Device OUTPUT Store as storetype DeviceType
| where (DeviceType="Controller" OR DeviceType="Register") AND store="a01"
| lookup process.csv process OUTPUT Services
| stats latest(_time) as time by instance store
| eval status=if(time!="","UP","DOWN")
| fields store instance service status

I am getting this output:

store | instance | service | status
a01 | ax | amazon x | UP
a01 | by | buy | UP

If I turn the store off, it does not show DOWN. For example, if I stop the services for "by", its row should show status DOWN, but instead the row disappears entirely:

store | instance | service | status
a01 | ax | amazon x | UP

Please help me out. Thank you.
Please suggest the best way to ingest Splunk search results into InfluxDB. A step-by-step guide would be appreciated.
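One common pattern, in case it helps frame answers: export the results (via the Splunk REST API, a CSV export, or an alert action) and convert each row into InfluxDB line protocol, which both the v1 /write and v2 /api/v2/write HTTP endpoints accept as a plain-text body. A sketch of the conversion step only, with illustrative measurement and field names (escaping of special characters in tag values is omitted):

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Render one InfluxDB line-protocol record: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A row as it might come out of a Splunk stats search.
row = {"host": "web01", "count": 42, "_time": 1653916800}

line = to_line_protocol(
    "splunk_events",
    tags={"host": row["host"]},
    fields={"count": row["count"]},
    ts_ns=row["_time"] * 10**9,  # line protocol expects nanoseconds by default
)
print(line)  # splunk_events,host=web01 count=42 1653916800000000000
```

Batches of such lines, newline-separated, go in a single POST; for production use the official influxdb client library handles the escaping and batching for you.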
Hi all, I need to create an alert that checks that a folder contains 10 files that are created daily. The tricky bit is that the folder name is based on the date. The complete path is \\TABASIPP\Prod_Data\appr\data\<today's date>, and in there we need to check that 10 files are created by 4 PM. The 10 files are:

20220530Report1.csv
20220530Report2.csv
Fails_30May2022_checked.csv
20220530_total_submissions.csv
20220530_loss_report.csv
30May2022_EOD.csv
etc.

Note that the file names all contain the current date, and the folder has the date in it as well. We need to check that the files are created by 4 PM and, if not, send an email alert. Is this possible with Splunk, and how would you do it? Thanks for any help in advance.
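One workable pattern is a scripted input that lists today's folder and emits the count (or the filenames) as an event, paired with a Splunk alert scheduled at 4 PM that fires when the count is below 10. The listing half might look like the sketch below; the YYYYMMDD folder-name format is an assumption, since the post only says the name is based on the date:

```python
import os
from datetime import date

BASE = r"\\TABASIPP\Prod_Data\appr\data"

def dated_folder(today: date, base: str = BASE) -> str:
    # Assumes the folder is named YYYYMMDD, e.g. ...\data\20220530.
    return os.path.join(base, today.strftime("%Y%m%d"))

def count_csv_files(folder: str) -> int:
    """Count the .csv files present; 0 if the folder does not exist yet."""
    if not os.path.isdir(folder):
        return 0
    return sum(
        1
        for name in os.listdir(folder)
        if name.lower().endswith(".csv")
        and os.path.isfile(os.path.join(folder, name))
    )

if __name__ == "__main__":
    folder = dated_folder(date.today())
    # Emit a single event Splunk can index and alert on.
    print(f"folder={folder} csv_count={count_csv_files(folder)}")
```

The alert would then be a scheduled search over this input with a trigger condition like csv_count < 10 and an email alert action.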
The table's "Previous" button works except when I'm going from page 2 to page 1 of the results: when I click "Previous", the results stay the same. The table's search is as follows:

source=<<source>>
| stats count by strategy_name
| sort -num(count)
| table strategy_name, count
| rename strategy_name as "Alert Type", count as Count
Is it possible to only allow REST API access with token authentication and not username:password? Is there a config to allow certain roles to be able to access the REST API?
I am importing sign-in logs from Azure, and I want to build a query that takes input from a CSV file (appid), searches the logs, and displays the number of successful and failed sign-ins per app.
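The usual SPL shape for this is a subsearch over the lookup, something like [| inputlookup appid.csv | fields appid], followed by stats count by appid, status (field and file names assumed here, since the post doesn't give them). The aggregation logic itself, illustrated with made-up events in Python:

```python
import csv
import io
from collections import Counter

# appid.csv supplies the apps to report on (column name assumed: "appid").
appid_csv = io.StringIO("appid\napp-1\napp-2\n")
wanted = {row["appid"] for row in csv.DictReader(appid_csv)}

# Stand-ins for Azure sign-in events.
events = [
    {"appid": "app-1", "status": "Success"},
    {"appid": "app-1", "status": "Failure"},
    {"appid": "app-2", "status": "Success"},
    {"appid": "app-9", "status": "Success"},  # not in the CSV, so ignored
]

# Keep only apps listed in the CSV, then count per (app, status) pair.
counts = Counter((e["appid"], e["status"]) for e in events if e["appid"] in wanted)
for (app, status), n in sorted(counts.items()):
    print(app, status, n)
```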
I have created a job that logs in on a specific web page. It runs perfectly on my local machine, but when I set the job to run as an AppDynamics synthetic job I get an error: sometimes a timeout, and sometimes "Unable to locate element". I have already tried different timeout periods and many types of element selectors (XPath, CSS selector, ID, ...). Locally every selector works fine; the problem happens every time I run it on AppDynamics. What else should I try?
I am trying to get the total count of a field called ID for the earliest and latest hours of a particular time range. Assume I am looking at a time range of 8 AM to 5 PM: I want the total count of the field "ID" for 8 AM to 9 AM and also for 4 PM to 5 PM, and to show what is different if the ID values differ between those two hours. This is the query I am using:

index=test
| rename "results{}.id" as "id"
| bin _time span=1h
| stats count(id) as total by _time
| delta total as difference
| fillnull value=0
| eval status=case(difference=0, "No change", difference<0, "Device(s) Removed", difference>0, "Device(s) Added")
| search status!="No change"
| rename _time as time
| eval time=strftime(time, "%m/%d/%y %H:%M:%S")
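A note on the approach: comparing only the counts (which is what delta does) tells you *that* something changed, not *which* IDs changed; collecting the distinct IDs per hour and taking a set difference gives both. The idea, illustrated with made-up data (in SPL, values(id) per hour carries the same information):

```python
from collections import defaultdict
from datetime import datetime

# (timestamp, id) pairs standing in for the indexed events.
events = [
    ("2022-05-30 08:15:00", "id-1"),
    ("2022-05-30 08:40:00", "id-2"),
    ("2022-05-30 16:05:00", "id-1"),
    ("2022-05-30 16:20:00", "id-2"),
    ("2022-05-30 16:45:00", "id-3"),
]

# Bucket the distinct IDs by hour of day.
ids_by_hour = defaultdict(set)
for ts, ident in events:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
    ids_by_hour[hour].add(ident)

first, last = ids_by_hour[8], ids_by_hour[16]
print("added:", sorted(last - first))    # ['id-3']
print("removed:", sorted(first - last))  # []
print("delta:", len(last) - len(first))  # 1
```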
Hello everyone. I'm fairly new to Splunk; I recently started a job as a security analyst in a SOC, where I get to use this cool tool. This question is kind of a continuation of my previous post: https://community.splunk.com/t5/Splunk-Search/Help-on-query-to-filter-incoming-traffic-to-a-firewall/m-p/599607/highlight/true#M208701 I had to make a query to do two things: first, look for any policy with several ports enabled; second, find out which of these policies were allowing or tearing down requests coming from public IP addresses. For this I came up with this query, which does the work imo:

index="sourcedb" sourcetype=fgt_traffic host="external_firewall_ip" action!=blocked
| eventstats dc(dstport) as different_ports by policyid
| where different_ports>=5
| eval source_ip=if(cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src), "private", "public")
| where source_ip="public"
| eval policy=if(isnull(policyname), policyid, policyid+" - "+policyname)
| eval port_list=if(proto=6, "tcp", if(proto=17, "udp", "proto"+proto))+"/"+dstport
| dedup port_list
| table source policy different_ports port_list
| mvcombine delim=", " port_list

However, the problem I'm having is that the port list is shown as one big vertical list, like this:

1
2
3
4
5

I'd like it to show like this: 1, 2, 3, 4, 5. I've also tried replacing the table command with a stats delim=", " value(port_list), but had no success. I'd appreciate any insight on how to solve this; I had mvjoin in mind but no clue how to approach it. Thanks in advance.
Present scenario: we have a "high memory" alert that detects systems whose memory hits the set threshold (Committed Memory usage over 115%), running on a schedule every hour at 15 minutes past the hour.

Requirement: this alert gives us a kind of live result, but we need historic data as a report, so that at the end of the month we can review how often each host triggered it, e.g. host "Alpha" was caught by this alert x times in the month, host "beta" y times, and so on. Basically, for a particular host, we need to find how frequently and how many times it had high memory.

| mstats avg(_value) AS CommittedMemoryInBytes WHERE index=xyz AND metric_name=Memory.Committed_Bytes by host
| join host [search index=abc sourcetype=WHM source=operatingsystem TotalPhysicalMemoryKB=*]
| eval PercentCommittedMemory = round((CommittedMemoryInBytes*pow(2,-30)) / (TotalPhysicalMemoryKB*pow(2,-20))*100, 2)
| where PercentCommittedMemory > 115
| table host, PercentCommittedMemory, CommittedMemoryInBytes, TotalPhysicalMemoryKB

Any help will be appreciated.
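As a side check while reworking this into a report: the eval's unit conversion is bytes to GiB (pow(2,-30)) divided by KiB to GiB (pow(2,-20)), and a factor slip here would silently inflate or deflate the percentage. A quick verification of the arithmetic with made-up numbers:

```python
# Sanity-check the unit conversion used in the eval:
# bytes * 2^-30 -> GiB, and KiB * 2^-20 -> GiB, so the ratio is GiB over GiB.
committed_bytes = 18 * 2**30       # 18 GiB committed
total_physical_kib = 16 * 2**20    # 16 GiB of RAM, expressed in KiB

pct = round((committed_bytes * 2**-30) / (total_physical_kib * 2**-20) * 100, 2)
print(pct)  # 112.5 -- committed memory can exceed physical RAM, hence thresholds above 100
```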
We have had instances where a dashboard was not updating. I would like to create an alert for when a dashboard or its panels are not updating. How do I achieve this? Please help.
This app is not supported on Splunk Cloud because of its jQuery version. Does anyone know if this will be updated, or is this app no longer supported by the Splunk Works contributors?
I recently added a new SH to our SHC. "show shcluster-status" is good, "show kvstore-status" is good. I created some KV store entries on SH1 and they replicated to SH5 (the new one). I also see the results of scheduled searches that write to the KV store(s) being replicated to the new SH. I see no errors in splunkd.log. However, whenever the DMC tries to access /services/server/introspection/kvstore/collectionstats for the "KV Store: Instance" panel, it errors with:

"The Rest request on the endpoint URI /services/server/introspection/kvstore/collectionstats?count=0 returned HTTP 'status not OK': code=500 internal server error'"

I can hit the other /kvstore APIs just fine: .../kvstore/replicasetstats and .../kvstore/serverstats. If I run "| rest splunk_server=local /services/server/introspection/kvstore/collectionstats" in a search bar on the new search head, I get the same error. This is version 8.2.0 and the KV store is using the wiredTiger storage engine. Any suggestions? TIA
Dear all, I have a table where FieldA is a string and FieldB to FieldE are numeric. Can we use the FieldA value to change the color of the FieldB to FieldE cells? The logic is as below:
1. Color the FieldB cell red when (FieldA = "1111", "3333", or "4444") and the FieldB cell value > 0.
2. Color the FieldC cell red when (FieldA = "2222" or "3333") and the FieldC cell value > 0.
3. Color the FieldD cell red when (FieldA = "2222" or "3333") and the FieldD cell value > 0.
4. Color the FieldE cell red when FieldA = "4444" and the FieldE cell value > 0.