All Topics

How to write a query via POST /search using the Splunk REST API in C#
Hi all, I want to analyze the round trip time and received count of the ping command for each ping packet size, or for all packet sizes. Therefore, I use stats as below:

<my basic search> `comment("generate ping_rtt_time for round trip time, ping_rcv_count for received packet count")`
| stats min(ping_*) as min_ping_*, max(ping_*) as max_ping_*, avg(ping_*) as avg_ping_*, perc20(ping_*) as pr20_ping_*, perc40(ping_*) as pr40_ping_*, stdev(ping_*) as stdev_ping_* by ping_packet_size

This way, if the user selects multiple packet sizes, e.g. 40 and 128 bytes, the related analysis can be provided. But if a user wants to read the analysis for all packets, meaning he wants to analyze all packet sizes (e.g. "All"), I can't use the same stats. If there are two packet sizes, e.g. 40 and 128 bytes, selecting the 40 and 128 options in a drop-down is different from selecting "All" in the same drop-down. Does anyone know how to analyze one or multiple packet sizes, as well as all packet sizes? Thank you.
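One possible way to handle the "All" case, offered only as a sketch: rewrite the grouping field so that it collapses to a single value when "All" is selected, and keep the per-size grouping otherwise. The drop-down token $packet_size$ and the helper field size_group are assumptions (a single-select drop-down whose "All" choice has the literal value All), not names from the original post.

<my basic search>
| eval size_group=if("$packet_size$"=="All", "All", ping_packet_size)
| where tostring(ping_packet_size)="$packet_size$" OR "$packet_size$"="All"
| stats min(ping_*) as min_ping_*, max(ping_*) as max_ping_*, avg(ping_*) as avg_ping_*,
        perc20(ping_*) as pr20_ping_*, perc40(ping_*) as pr40_ping_*, stdev(ping_*) as stdev_ping_*
  by size_group

A multiselect input would need extra handling, but the idea of driving the by field from the token stays the same.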
Hello, how can I join data from an index and dbxquery without using the join, append, or stats commands?
Issue with join: subsearch limit of 50,000 rows or fewer; missing data.
Issue with append: requires a "stats values" command to correlate the data, which gives "merged data" in one row that then needs to be split (using mvexpand or another method). mvexpand has memory issues and is slow.
At this time, my solution is moving the dbxquery data into a CSV file and using the lookup command to join it with the data from the index, but the CSV file is static and needs to be manually updated. Please suggest. Thank you.

index=vulnerability_index | table ip_address, vulnerability, score

Table 1:
ip_address    vulnerability          score
192.168.1.1   SQL Injection          9
192.168.1.1   OpenSSL                7
192.168.1.2   Cross Site-Scripting   8
192.168.1.2   DNS                    5

| dbxquery query="select * from tableCompany"

Table 2:
ip_address    company   location
192.168.1.1   Comp-A    Loc-A
192.168.1.2   Comp-B    Loc-B
192.168.1.5   Comp-E    Loc-E

Expected result after joining the data from the index (Table 1) and dbxquery (Table 2):

Table 3:
ip_address    company   location   vulnerability          score
192.168.1.1   Comp-A    Loc-A      SQL Injection          9
192.168.1.1   Comp-A    Loc-A      OpenSSL                7
192.168.1.2   Comp-B    Loc-B      Cross Site-Scripting   8
192.168.1.2   Comp-B    Loc-B      DNS                    5
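One way around the static CSV, sketched here rather than given as a definitive answer: schedule a search that refreshes the lookup from dbxquery, then enrich the index data with the lookup command, which avoids the join/append limits. The lookup file name company_assets.csv is an assumption.

Scheduled search (e.g. hourly):
| dbxquery query="select * from tableCompany"
| table ip_address, company, location
| outputlookup company_assets.csv

Main search:
index=vulnerability_index
| lookup company_assets.csv ip_address OUTPUT company location
| table ip_address, company, location, vulnerability, score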
Hi, I am working on Splunk capacity planning:
1. Splunk is in the cloud
2. 100 GB of data
3. 10 users
4. No use of Splunk Enterprise Security
How much license would be required? How many indexers? How many search heads? Is there any site where I can get the exact details?
I'm confused about some of the differences between Cloud and Enterprise. Sometimes the documentation on Cloud does not go far enough to define those differences, and one of them is the deletion of events/indexes. If I use Splunk Web and delete an index, is it "marked" as deleted like in Enterprise, where it is just hidden from search, or is it physically deleted on Cloud? Also, if I use the sourcetype=wantedsource | delete approach on the search head, same question.
Hi, I am setting this up for the first time and have questions regarding how to configure it. I have function apps that are using App Insights and a shared Log Analytics workspace. Do I need to configure a diagnostic setting on each individual function app to use the event hub to stream the logs to Splunk, or do I set up a diagnostic setting on the Log Analytics workspace that will send all the logs it receives to Splunk? Additionally, if there is any documentation on how to configure this, it would be very much appreciated. Thanks! Ethan
Hello, I'm still in the learning process of Splunk searches, and I have been tasked to create a table that contains only open transactions based on "where closed_txn=0", but also join a ServiceNow incident # to each row in the table. I've been bumbling around, testing and failing on this one. I've got it to a point where the table only shows the open transactions, but each one is being duplicated for every ServiceNow incident #. Below is the search I am using; I've probably done this all wrong.

integrationName="Opsgenie Edge Connector - Splunk", alert.message = "STORE*", alert.message != "*Latency" alert.message != "*Loss" action != "AddNote"
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| table _time, alert.updatedAt, alert.message, alert.alias, alert.id, action, "alertDetails.Alert Details URL", _raw, closed_txn, _time
| where closed_txn=0
| rename alert.message AS "Branch"
| rename "alertDetails.Alert Details URL" as "Source Link"
| eval Created=strftime(_time,"%m-%d-%Y %H:%M:%S")
| fields Created, Branch, "Source Link"
| sort by Created DESC
| fields - _raw, _time
| join s max=0
    [ search (integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE*") OR (sourcetype="snow:incident" dv_opened_by=OPSGenieIntegration)
    | eval joiner=if(integrationName="Opsgenie Edge Connector - Splunk", alertAlias, x_86994_opsgenie_alert_alias)
    | stats values(*) as * by joiner
    | where alertAlias=x_86994_opsgenie_alert_alias
    | rename dv_number as Incident
    | table alertAlias, Incident
    | fields alertAlias, Incident ]
| table Created, Branch, "Source Link", Incident

Thanks for any help on this one, much appreciated. Tom
I am a student at Embry-Riddle Aeronautical University and I am attending MISA 532 Intgd Threat Warning Attk EIS. Our semester project is to create a dashboard using Splunk, adding panels each week. I am requesting assistance because I have been able to download Splunk successfully but have not been able to use Splunk to create dashboards. I am asking if someone can assist me with dashboard creation so that I can fulfill my class requirements. I am tasked to create three panels (see the sketch after this list):
1. Access Denied / Privilege Escalation: how many failed attempts or privilege escalations were recorded.
2. Failed Login: how many failed login attempts were detected for company users.
3. Social Media (OSINT): a dashboard showing OSINT information for employees.
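A minimal sketch of what the failed-login panel could look like, purely as a starting point; it assumes Windows Security audit logs are already being indexed, and the index name wineventlog is an assumption:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
| timechart span=1d count as "Failed Logins"

EventCode 4625 is the Windows "an account failed to log on" event; the same pattern against your own audit source could feed the access-denied / privilege-escalation panel.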
Hi, I have a log entry like the line below:

2023-08-22T10:48:01.340641-07:00 ARC1 (PID:63766948): Archived Log entry 176651 added for T-1.S-31459 ID 0xffffffffadc86430
host = alert1.corp
source = /oracle/diag/rdbms/testprd/TESTPRD1/trace/alert_TESTPRD1.log
sourcetype = alert-logs-oracle

I want to extract the db name from this data, which is "testprd" (the 4th field of source), as well as the hostname. Can someone please help me use a delimiter to extract the db name from source?
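Since the path segments of source are separated by "/", a sketch like the one below (the base search is left as a placeholder) should pull out both pieces; the rex could equally be replaced by eval db_name=mvindex(split(source,"/"),4).

index=... sourcetype=alert-logs-oracle
| rex field=source "/oracle/diag/rdbms/(?<db_name>[^/]+)/"
| table host, db_name, source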
Hi Team, we are trying to extract the hostname from the logs, but we are unable to get the exact output (we need the hostname as sample-987). Please find the log and the command we tried below. Please assist us on high priority. Thanks.

Log:
Symptom: type DD Alert Sample-987: CRITICAL: DiskFailure: HardwareFailure

Our command:
| rex field=_raw "DD\s\Alert\s(?<HostName>\w+-\d+)"

Regards, Lakshmi
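For what it's worth, the posted pattern contains a stray backslash before Alert (\A is a start-of-string anchor, so the regex never matches mid-event). A sketch with that fixed, plus lower() to get the lowercase form requested:

| rex field=_raw "DD\s+Alert\s+(?<HostName>\w+-\d+)"
| eval HostName=lower(HostName)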
Hi, I wanted to know if it's possible to create a new indexer cluster and add it as peers to the existing cluster, in order to copy all the old buckets onto the new set of indexers, and then segregate it once all the old data is available on the new one?
Hello,

sourcetype=reactorjob index=syslog
| rex field=_raw "\[(?<cycle_name>[^\]]+)\]"
| rex field=_raw "\[(?<duration_ms>[^\]]+)ms\]"
| rex field=_raw max_match=0 "(?<step>\d+): (?<duration>\d+)"
| stats avg(duration_ms) by cycle_name

This creates a simple column chart where I can see how long one cycle was running. But I also have data like the following, where the number before ":" is the step and the number after ":" is how long that step was running:

run time:
1: 55
2: 22
3: 17
4: 14
5: 5
6: 14
7: 30
9: 5889
10: 6
11: 2986
12: 17

If you add up the <duration> of all steps, you get the same value as <duration_ms>. So in the visualization I would like to split the column of one <cycle_name> by <duration>. Is that possible? Thank you.
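One way to get a per-step split that can then be stacked in the column chart, sketched under the assumption that step and duration are extracted as parallel multivalue fields by the posted rex: zip and expand the pairs, then chart duration over cycle_name by step.

sourcetype=reactorjob index=syslog
| rex field=_raw "\[(?<cycle_name>[^\]]+)\]"
| rex field=_raw max_match=0 "(?<step>\d+): (?<duration>\d+)"
| eval pair=mvzip(step, duration, ":")
| mvexpand pair
| eval step=mvindex(split(pair, ":"), 0), duration=tonumber(mvindex(split(pair, ":"), 1))
| chart sum(duration) over cycle_name by step

With the chart set to stacked columns, each cycle's column is then divided by step.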
Does Splunk SOAR have a use case definition template, like XSOAR has its own use case definition template?
Hello, is it possible to start monitoring metrics for the hosts from which we are collecting logs in Splunk ES? Thank you, f_f
Hi all, I count the number of ssl-login-fail events for each hour:

index=... host=... action="ssl-login-fail"
| timechart span=1h count(eval(action="ssl-login-fail")) as result

That is interesting, but I would like the rate of change, in order to raise an alert if this rate changes too much (maybe >50%). I have tested a lot of things but I'm lost... Thanks a lot.
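A sketch of one way to express the hour-over-hour rate of change; the 50% threshold and the field names prev_result and pct_change are assumptions, the first bucket has no previous value, and a previous value of 0 would need extra handling:

index=... host=... action="ssl-login-fail"
| timechart span=1h count as result
| streamstats current=f window=1 last(result) as prev_result
| eval pct_change=round(abs(result - prev_result) / prev_result * 100, 1)
| where pct_change > 50

Saved as an alert that triggers when the number of results is greater than zero, this would fire whenever an hour deviates by more than 50% from the previous hour.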
Hi, I have the following query for training a model. However, I want to save my model under a name that comes from a lookup with a single column and a single value. In a nutshell, I want to set the model name dynamically.

index="Abc"
| fields Days, Count, Target
| sample partitions=100
| appendpipe
    [ | search partition_number < 90
    | fields - partition_number
    | fit DecisionTreeRegressor "target" from * splitter=best into "model_name" apply=false ]

So currently the model name is "model_name", but I want it to come from the lookup. @niketn @gcusello
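One workaround that is sometimes suggested, sketched here with assumed names (a lookup model_names.csv with a column model_name, and a model name that needs no quoting): let map substitute the lookup value into the fit ... into clause, since map performs literal $field$ substitution before running the inner search.

| inputlookup model_names.csv
| map maxsearches=1 search="search index=Abc
    | fields Days, Count, Target
    | sample partitions=100
    | appendpipe [ | search partition_number < 90
        | fields - partition_number
        | fit DecisionTreeRegressor Target from * splitter=best into $model_name$ apply=false ]"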
Hello all, we are sending some JSON files using HEC (raw endpoint), where each file contains some metadata at the beginning (see below). We want this metadata to be present in ALL events of that file. Basically, we want to avoid having the common data repeated in each event of the JSON. We already tried creating a regex that extracts some fields, but it adds those fields to one event only, not to all of them. The JSON files look like this:

{
  "metadata": {
    "job_id": "11234",
    "project": "Platform",
    "variant": "default",
    "date": "26.06.2023"
  },
  "data": [
    {
      "ID": "1",
      "type": "unittest",
      "status": "SUCCESS",
      "identified": 123
    },
    {
      "ID": "2",
      "type": "unittest",
      "status": "FAILED",
      "identified": 500
    },
    {
      "ID": "3",
      "type": "unittest",
      "status": "SUCCESS",
      "identified": 560
    }
  ]
}

We want to "inject" the metadata attributes into each event, so we expect to get a table like this (the first four columns come from the metadata, the rest from the data):

job_id   project    variant   date         ID   type       status    identified
11234    Platform   default   26.06.2023   1    unittest   SUCCESS   123
11234    Platform   default   26.06.2023   2    unittest   FAILED    500
11234    Platform   default   26.06.2023   3    unittest   SUCCESS   560

Currently we use this configuration in props.conf:

[sepcial_sourcetype]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
LINE_BREAKER = ((?<!"),|[\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
description = Jenkins Job Configurations
pulldown_type = 1
disabled = false
SEDCMD-removeunwanted1 = s/\{\s*?"metadata(.*\s*)+?}//g
SEDCMD-remove_prefix = s/"data":\s*\[//g
SEDCMD-remove_suffix = s/\]\s*}//g

What should our props.conf and transforms.conf look like to accomplish this? Even though this splits the events and extracts the fields correctly, it obviously causes the metadata part to be dropped (due to SEDCMD-removeunwanted1). But even without that setting, the metadata is only present in its own separate event and is not replicated to all events. We saw here that sending custom metadata is also not supported, although that would have been perfect for our use case: https://community.splunk.com/t5/Getting-Data-In/Does-the-HTTP-Event-Collector-API-support-events-with-arbitrary/m-p/216092 We already have a workaround where we edit the JSON so that each event contains the metadata, but this is not ideal, as it requires preprocessing before sending to Splunk and all events would carry repeated data. So we are looking for a solution that can be handled by Splunk directly. Thanks for any hints!
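Index-time props/transforms operate on one event at a time, so what follows is only a search-time sketch, not a definitive answer: if SEDCMD-removeunwanted1 is dropped so that the metadata object survives as its own event, something like this could spread its fields across all events that share the same source. The field names after automatic JSON extraction (metadata.job_id and so on) and the meta_* renames are assumptions.

index=... sourcetype=sepcial_sourcetype
| rename "metadata.*" as "meta_*"
| eventstats values(meta_job_id) as job_id, values(meta_project) as project,
             values(meta_variant) as variant, values(meta_date) as date by source
| search ID=*
| table job_id, project, variant, date, ID, type, status, identified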
Hello Team, is there a limit to the number of URLs that can be monitored on a server?
Hi All, I have a query like the one below to get the count of the different "Actual_Status" values in the tabular format shown:

... | rex field=_raw "\<tr\>\s+\<td\s\>(?P<Domain>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>(?P<App_Name>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>(?P<Machine>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>(?P<Type>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>(?P<Instance>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>(?P<Actual_Status>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>(?P<Expected_Status>[^\<]+)\<\/\w+\>\<\/\>\<\/td\>"
| dedup App_Name,Machine,Type,Instance,Actual_Status,Expected_Status
| stats count as Total by Actual_Status

Actual_Status   Total
HAWK            207
RUNNING         46
RUNNING-OOS     91
STOPPED         415

I am using these 7 values of "Actual_Status" for our dashboard (RUNNING, RUNNING-OOS, STOPPED, HAWK, STOPPING, STANDBY and ERROR), and I want all of the field values to appear in the table even if their count is zero, like in the table below, so that I can show the visualization in "Column chart" view in a more meaningful way:

Actual_Status   Total
HAWK            207
RUNNING         46
RUNNING-OOS     91
STOPPED         415
STANDBY         0
STOPPING        0
ERROR           0

Requesting you all to help me modify the query to get the expected table and dashboard visualization. Your kind inputs are highly appreciated! Thank You!
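One commonly used pattern for showing zero-count categories, sketched against the posted query (the literal status list is taken from the post): append a zero row for every expected status and then keep the maximum count per status.

... | dedup App_Name,Machine,Type,Instance,Actual_Status,Expected_Status
| stats count as Total by Actual_Status
| append
    [| makeresults
     | eval Actual_Status="RUNNING,RUNNING-OOS,STOPPED,HAWK,STOPPING,STANDBY,ERROR", Total=0
     | makemv delim="," Actual_Status
     | mvexpand Actual_Status
     | fields Actual_Status, Total]
| stats max(Total) as Total by Actual_Status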
Hello, we were running v9.0.5 and I upgraded the master to v9.1.0.2. This is on CentOS 7. The upgrade ran successfully, the server was rebooted, and Splunk started with all ports open:

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open

splunk status reports these as running:

splunkd is running (PID: 4730).
splunk helpers are running (PIDs: 4784 4828).

Inside the instance, I can ping localhost, but when I try to reach localhost on port 8000 it hangs for a long time. Also, from my indexer instance I can ping the master IP, but I cannot reach it on ports 8000 and 8089. I checked security groups and such; the ports are allowed and nothing changed. The UI was running before I started the upgrade. Has anyone run into this issue with the upgrade? Thanks!