All Topics

Within connections I can only select the MS-SQL Server driver using the MS generic driver. I am getting the error com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1. I think it is a driver issue, but I cannot select another driver. Does anyone have an idea how to fix this?

SELECT
    [EPOEvents].[ReceivedUTC] as [timestamp],
    [EPOEvents].[AutoID],
    [EPOEvents].[ThreatName] as [signature],
    [EPOEvents].[ThreatType] as [threat_type],
    [EPOEvents].[ThreatEventID] as [signature_id],
    [EPOEvents].[ThreatCategory] as [category],
    [EPOEvents].[ThreatSeverity] as [severity_id],
    [EPOEventFilterDesc].[Name] as [event_description],
    [EPOEvents].[DetectedUTC] as [detected_timestamp],
    [EPOEvents].[TargetFileName] as [file_name],
    [EPOEvents].[AnalyzerDetectionMethod] as [detection_method],
    [EPOEvents].[ThreatActionTaken] as [vendor_action],
    CAST([EPOEvents].[ThreatHandled] as int) as [threat_handled],
    [EPOEvents].[TargetUserName] as [logon_user],
    [EPOComputerProperties].[UserName] as [user],
    [EPOComputerProperties].[DomainName] as [dest_nt_domain],
    [EPOEvents].[TargetHostName] as [dest_dns],
    [EPOEvents].[TargetHostName] as [dest_nt_host],
    [EPOComputerProperties].[IPHostName] as [fqdn],
    [dest_ip] = (
        convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerProperties].[IPV4x] + 2147483648))),1,1)))
        +'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerProperties].[IPV4x] + 2147483648))),2,1)))
        +'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerProperties].[IPV4x] + 2147483648))),3,1)))
        +'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerProperties].[IPV4x] + 2147483648))),4,1)))
    ),
    [EPOComputerProperties].[SubnetMask] as [dest_netmask],
    [EPOComputerProperties].[NetAddress] as [dest_mac],
    [EPOComputerProperties].[OSType] as [os],
    [EPOComputerProperties].[OSCsdVersion] as [sp],
    [EPOComputerProperties].[OSVersion] as [os_version],
    [EPOComputerProperties].[OSBuildNum] as [os_build],
    [EPOComputerProperties].[TimeZone] as [timezone],
    [EPOEvents].[SourceHostName] as [src_dns],
    [src_ip] = (
        convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),1,1)))
        +'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),2,1)))
        +'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),3,1)))
        +'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),4,1)))
    ),
    [EPOEvents].[SourceMAC] as [src_mac],
    [EPOEvents].[SourceProcessName] as [process],
    [EPOEvents].[SourceURL] as [url],
    [EPOEvents].[SourceUserName] as [source_logon_user],
    [EPOComputerProperties].[IsPortable] as [is_laptop],
    [EPOEvents].[AnalyzerName] as [product],
    [EPOEvents].[AnalyzerVersion] as [product_version],
    [EPOEvents].[AnalyzerEngineVersion] as [engine_version],
    [EPOEvents].[AnalyzerDATVersion] as [dat_version],
    [EPOProdPropsView_VIRUSCAN].[datver] as [vse_dat_version],
    [EPOProdPropsView_VIRUSCAN].[enginever64] as [vse_engine64_version],
    [EPOProdPropsView_VIRUSCAN].[enginever] as [vse_engine_version],
    [EPOProdPropsView_VIRUSCAN].[hotfix] as [vse_hotfix],
    [EPOProdPropsView_VIRUSCAN].[productversion] as [vse_product_version],
    [EPOProdPropsView_VIRUSCAN].[servicepack] as [vse_sp]
FROM [EPOEvents]
LEFT JOIN [EPOLeafNodeMT] ON [EPOEvents].[AgentGUID] = [EPOLeafNodeMT].[AgentGUID]
LEFT JOIN [EPOProdPropsView_VIRUSCAN] ON [EPOLeafNodeMT].[AutoID] = [EPOProdPropsView_VIRUSCAN].[LeafNodeID]
LEFT JOIN [EPOComputerProperties] ON [EPOLeafNodeMT].[AutoID] = [EPOComputerProperties].[ParentID]
LEFT JOIN [EPOEventFilterDesc] ON [EPOEvents].[ThreatEventID] = [EPOEventFilterDesc].[EventId] AND ([EPOEventFilterDesc].[Language]='0409')
WHERE [EPOEvents].[AutoID] > ?
ORDER BY [EPOEvents].[AutoID] ASC
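As an aside, the nested convert chain in the query above reconstructs a dotted-quad IPv4 address from the signed integer ePO stores in IPV4x. A minimal Python sketch of the same conversion, assuming the same +2147483648 offset encoding the SQL uses:

```python
import struct

def epo_int_to_ip(ipv4x: int) -> str:
    """Convert ePO's signed-offset IPv4 integer to dotted-quad notation.

    ePO stores the address as a signed 32-bit value shifted by -2^31,
    so adding 2147483648 recovers the unsigned 32-bit address.
    """
    unsigned = ipv4x + 2147483648
    octets = struct.pack(">I", unsigned)  # big-endian 4-byte representation
    return ".".join(str(b) for b in octets)

# 10.0.0.1 is 167772161 unsigned; the stored value is offset by -2^31
print(epo_int_to_ip(167772161 - 2147483648))  # → 10.0.0.1
```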
I added a column chart. The metric values display as 1. I published events with values at given times, e.g.: time 1 value 4, time 2 value 3, time 3 value 5. The x-axis appears to display the time properly. How do I get the value axis to display the raw values from the events?
I get this error when I attempt to add a server to the Splunk Phantom App on Splunk Enterprise. I have added the phantom role to the admin role within Splunk Enterprise. I have disabled SSL verification in case that was the issue. There are no network connectivity issues between the servers, but I am still getting this 400 error with no text content. I also created a new automation user on the Phantom side and applied updated SSL certificates, and now there are no SSL errors. Has anyone seen this issue yet? I have it hosted in AWS on EC2 instances sharing the same security groups.

"There was an error adding the server configuration. On Phantom: Verify server's 'Allowed IPs' and authorization configuration. Status: 400 Text:"
Hello, I'm trying to create a search that grabs an authentication failure event followed by an authentication success event from the same src. My current search looks like this:

index=wineventlog sourcetype=wineventlog source=wineventlog:security EventCode=4625 src=host1
| stats values(dest) as dest by _time, src
| eval event_id=start
| search [| search index=wineventlog sourcetype=wineventlog source=wineventlog:security EventCode=4624 src=host1 | stats values(dest) as dest by _time, src | eval event_id=finish]
| transaction src startswith=event_id=start endswith=event_id=finish maxspan=2m
| stats values(dest) as dest by _time, src

Each individual search runs fine on its own and finds events for host1, and comparing the results of each search, I can see that the events occur within 2 minutes of each other. However, my transaction search fails to grab both events. Instead it only grabs the events from the first search and fails to grab the events from the subsearch. Am I missing something?
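For reference, the intended logic (a 4625 failure followed within two minutes by a 4624 success from the same src) can be sketched outside SPL. A hypothetical Python version over (time, src, code) tuples, useful for sanity-checking what the transaction should pair up:

```python
def find_fail_then_success(events, window=120):
    """Return (src, fail_time, success_time) triples where an auth failure
    (code 4625) is followed by a success (4624) from the same src within
    `window` seconds. `events` is an iterable of (epoch_time, src, code)."""
    pending = {}  # src -> time of most recent unmatched failure
    hits = []
    for t, src, code in sorted(events):
        if code == 4625:
            pending[src] = t
        elif code == 4624 and src in pending:
            if t - pending[src] <= window:
                hits.append((src, pending.pop(src), t))
    return hits

sample = [(100, "host1", 4625), (160, "host1", 4624), (300, "host2", 4624)]
print(find_fail_then_success(sample))  # → [('host1', 100, 160)]
```

In SPL, a streamstats-based approach (tracking the previous EventCode per src) is often suggested as a more robust alternative to transaction for this kind of pairing, though the exact query depends on the data.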
Hey everyone, above you can see an example of what I can expect in my work environment. My goal is to modify the values from "tag_value" with the values of "tag_modifier", e.g.:

"True" -> "is : True"
or
"1611156456" -> "2021-01-20T16:27:36+0100"

The "tag_modifier" field is sourced from a lookup where I want to create a similar value for every "tag_type", so I can easily add and manage my tag_modifiers even with great amounts of tag_types. Is there any way to do this? (The case() macro is not an option because of the amount of tag_types.) Thanks in advance for any help!
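The lookup-driven approach described above — one modifier rule per tag_type — can be illustrated with a small Python sketch. The tag_type names and rendering rules here are made-up examples, not actual field values from the post:

```python
from datetime import datetime, timezone

# Hypothetical lookup: tag_type -> how to render tag_value
modifiers = {
    "bool_tag": lambda v: f"is : {v}",
    "epoch_tag": lambda v: datetime.fromtimestamp(int(v), tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S+0000"),
}

def apply_modifier(tag_type, tag_value):
    # Fall back to the raw value when no modifier is defined for the type
    fn = modifiers.get(tag_type, lambda v: v)
    return fn(tag_value)

print(apply_modifier("bool_tag", "True"))  # → is : True
print(apply_modifier("epoch_tag", "1611156456"))
```

In Splunk itself the equivalent is typically a lookup that maps tag_type to a template or format string, applied with eval functions such as strftime() for the epoch case.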
Hello, I found this INFO message in the UF's internal logs. What does it mean? The forwarder is working fine and connected to the DS. The management network port on the forwarder is disabled for security reasons. UF 7.1.4 / Splunk Enterprise 7.3.4. Thanks.
Hello, how can I create a list of all clients reporting into a host, whether matched or unmatched? This should be a very simple task, but I sure cannot find a solution. Thanks, Terry
Hello, in my dashboard I used "hide in the flow map" from the edit menu and hid the link between the node and the database. How do I get this link back, or how do I re-enable it so that the link between the node and the database is visible in the flow map? Thank you.

Post edited by @Ryan.Paredez to improve the title. Please do your best to write a clear title. This makes posts more discoverable for the community.
Hello everyone, I've spent a lot of time setting up eStreamer for a Cisco ASA firewall. I'm still receiving these errors via my heavy forwarder. Any ideas? I have hit a wall! Thanks in advance.
Hello everyone, I'm trying to configure a deployment server and I got an error on the forwarder:

TCPOutAutoLB-1 Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.

Last 50 related messages:
01-26-2021 14:55:56.254 +0300 WARN TcpOutputProc - Applying quarantine to ip=10.22.31.80 port=9997 _numberOfFailures=2
01-26-2021 14:55:56.253 +0300 INFO TcpOutputProc - Found currently active indexer. Connected to idx=10.22.22.241:9997, reuse=1.
01-26-2021 14:55:26.660 +0300 INFO TcpOutputProc - Connected to idx=10.22.22.241:9997, pset=0, reuse=0.
01-26-2021 14:55:26.553 +0300 INFO TcpOutputProc - _isHttpOutConfigured=NOT_CONFIGURED
01-26-2021 14:55:26.322 +0300 INFO TcpOutputProc - Group receiver initialized with maxQueueSize=512000 in bytes.

What does it mean? Thank you.
Firstly, my indexer cluster consists of 2 indexers (with a 6TB volume on each) and a Cluster Master to manage them. For the most part CPU and memory are where you expect them to be (CPU anywhere between 20-40% and memory around the same). This is with 83 sources averaging around 150-200GB a day.

We have automated RHEL OS patching that occurs on a regular schedule, and obviously this means that the environment is not in maintenance mode. When the patching occurs, the indexers are patched at separate times (for example, one indexer will patch an hour after the last one is restarted). Consequently, after the 2nd one has been patched, I see that my indexers run hot (around 90%+) in CPU and at the RHEL cut-out limit (28GB of 32GB), where RHEL protects the OS and kills the splunkd service, which then restarts. This goes on for something like 6-8 hours before things settle back down to the normal 20-40% utilization until the next patching cycle. Thankfully this doesn't impact us too greatly, as everything eventually balances out and Splunk just keeps on working along (searches a little slower, obviously).

We are currently running on the 7.2 stream and would like to know the best way to reduce this high throughput and reduce how long it takes for the CPU/memory to balance. Would setting a maintenance window before patching and then removing it afterwards (or the next day) reduce this? I also noted that when an app is installed to the indexers (a cluster deployment), this also tends to cause the high CPU/memory spike for almost as long (closer to 3-4 hours). Currently I don't have scope to increase the resources I do have.
Hi, I am trying to extract "Sync_State" from the below log types:

log1: Synchronization : In Sync
log2: Synchronization : Out of Sync

I created the rex command "(?ms)Synchronization\s:\s(?<Sync_State>\w+\s\w+)". Using this I am getting "In Sync" correctly, but for "Out of Sync" I am getting only "Out of". Please help me create a rex command to extract the field values in the desired way. Thank you.
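The problem is that `\w+\s\w+` matches exactly two words, so the three-word value "Out of Sync" gets cut short. One fix is to capture everything after the separator instead. A quick Python check of the adjusted pattern (Python and SPL both use PCRE-style named groups, though SPL writes them as `(?<name>...)`):

```python
import re

# Capture the remainder of the line after "Synchronization : "
pattern = re.compile(r"Synchronization\s:\s(?P<Sync_State>.+)")

for line in ["Synchronization : In Sync", "Synchronization : Out of Sync"]:
    m = pattern.search(line)
    print(m.group("Sync_State"))  # → In Sync, then Out of Sync
```

In SPL the equivalent would be rex "Synchronization\s:\s(?<Sync_State>.+)", or an explicit alternation (?<Sync_State>In Sync|Out of Sync) if only those two values are expected.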
Hi everyone, I have a lookup file that contains a name and an ID.

Brokers.csv:
Name     ID
Broker1  101
Broker2  102
Broker3  103
Broker4  201
Broker5  202
Broker6  203

I run this search query on my data:

index=SQL | fields BrokerID host | convert timeformat="%Y-%m-%d" ctime(_time) AS date | stats values(BrokerID) by date

and these are my results:

date        BrokerID
2020-12-27  101, 102
2020-12-27  201, 202, 203
2020-12-28  101
2020-12-29  101, 102, 103
2020-12-29  201, 202, 203

So what query should I run to get the following result (the brokers that did not report on each date)?

2020-12-27  Broker3
2020-12-28  Broker2, Broker3, Broker4, Broker5, Broker6

Thanks in advance.
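The expected table is the per-date set difference between all brokers in the lookup and the brokers that actually reported, with IDs mapped back to names. A Python sketch of that logic, using the data from the post:

```python
brokers = {101: "Broker1", 102: "Broker2", 103: "Broker3",
           201: "Broker4", 202: "Broker5", 203: "Broker6"}

reported = {
    "2020-12-27": {101, 102, 201, 202, 203},
    "2020-12-28": {101},
    "2020-12-29": {101, 102, 103, 201, 202, 203},
}

def missing_brokers(reported, brokers):
    """For each date, list the names of brokers whose ID never appeared."""
    all_ids = set(brokers)
    return {date: sorted(brokers[i] for i in all_ids - seen)
            for date, seen in reported.items()}

print(missing_brokers(reported, brokers))
```

In SPL this shape is usually built with inputlookup to enumerate all brokers and a NOT subsearch (or a stats/values comparison) to subtract the ones seen per date.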
How do I display a pie chart on a Choropleth Map just as in the picture? The link below is from the Splunk election dashboard. I am sure they have used JS, CSS or something additional, but I'm not sure exactly what. Can someone guide me on how to achieve this in Splunk? https://election2020.splunkforgood.com/elections. Thanks in advance!
Hi everyone, I have one requirement. I have created an error message alert as below:

index=abc ns=xyz CASE(ERROR)
| rex field=_raw "ERROR(?<Error_Message>.*)"
| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N")
| cluster showcount=t t=0.3
| table app_name, Error_Message, cluster_count, _time, environment, pod_name, ns
| dedup Error_Message
| rename app_name as APP_NAME, _time as Time, environment as Environment, pod_name as Pod_Name, cluster_count as Count

But from this I am getting duplicate alerts. The entire data comes back the same except the count. Can someone guide me on what I should remove from the query?
Good day, I have been trying to figure out how to accomplish the following task for a few days now and thought I would ask the community for ideas. I have events coming into Splunk that have a service start and service end date, like the example provided below.

ServiceStartDate="2021-01-26", ServiceEndDate="2021-03-31"

I have been trying to figure out how I can filter based on the ServiceEndDate. I want to be able to select either a date range or just a specific date. This should then produce all events with a ServiceEndDate within that range or on the specific date selected. The search I have been testing is the following:

index="my_index" sourcetype="my_sourcetype" source="my_source"
| eval _time=strptime(ServiceEndDate,"%Y-%m-%d")
| sort limit=0 - _time
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")

This allows me to use the time picker to filter on ServiceEndDate, but does not really produce all the results I ask for. For example, I would choose a date range from 01/20/2020 to 12/20/2021. The search won't produce all events for that range, unfortunately. I know that there are indeed events with a ServiceEndDate in that range that are not displayed, because if I select "All time" in the time picker I can see them. The number of events that should be returned does not exceed 10,000, but I put limit=0 in there just in case. The end goal is to put this into a dashboard so I can produce the filtered events in a table. Any ideas would be greatly appreciated.
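The core of the approach above is converting ServiceEndDate to epoch time and comparing it against the picker's boundaries. The same comparison as a Python sketch (assuming ISO-formatted dates; None models the picker's "+Infinity" upper bound):

```python
from datetime import datetime

def in_range(service_end_date, min_epoch, max_epoch):
    """True when ServiceEndDate (YYYY-MM-DD) falls inside [min, max]."""
    t = datetime.strptime(service_end_date, "%Y-%m-%d").timestamp()
    return t >= min_epoch and (max_epoch is None or t <= max_epoch)

lo = datetime(2020, 1, 20).timestamp()
hi = datetime(2021, 12, 20).timestamp()
print(in_range("2021-03-31", lo, hi))   # → True
print(in_range("2022-01-01", lo, hi))   # → False
```

One thing worth checking in the SPL version: the time picker also bounds the initial index search by the events' original _time, before the eval overwrites it, so events indexed outside the picked range never reach the where clause. Running the base search over All Time and filtering only on the evaluated field may be why "All time" shows the missing events.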
Good morning guys, much help needed. I have been receiving a lot of phishing attempts to recipients' emails. I am looking for the best query that can show me whether these emails were filtered as spam or quarantined. I have been using this query, but it doesn't tell me if the sender's email was filtered as spam or quarantined, and sometimes it does not even work:

sourcetype=o365:management:activity "sender.email@xx.com" AND "recipient.email@hhs.gov"
| table sourcetype _time P2Sender recipients{} subject
| sort recipients{}
| dedup recipients{}

Thanks.
Hi team, I have a stats requirement: get the user retention rate for users that visit a module per month over the last year. Detailed requirement:

Step 1: find the distinct users that visited a module in January 2020.
Step 2: go to February 2020 and find the number of users from step 1 that visited the module again.
Step 3: go to March 2020 and find the number of users from step 2 that visited the module again.
Step 4: go to April 2020 and find ...
...
... December 2020 and find ...

Here is a log sample in Splunk. How can I write the query to get the expected user retention rate?

2021-01-19 06:00:38,668 PLV=REQ CIP=0.0.0.0 CMID=test CMN="testCompany" UID=testUser AGN="[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36]" module=SUCCESSION
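The steps above amount to intersecting each month's user set with the cohort that survived the previous step. A Python sketch of that shrinking-cohort computation, with made-up user IDs:

```python
def retention(monthly_users):
    """monthly_users: list of sets of user IDs, one per month, in order.
    Returns the surviving-cohort size per month, starting from month 1."""
    cohort = monthly_users[0]
    sizes = [len(cohort)]
    for users in monthly_users[1:]:
        cohort = cohort & users   # keep only users retained from the prior step
        sizes.append(len(cohort))
    return sizes

jan = {"u1", "u2", "u3", "u4"}
feb = {"u1", "u2", "u5"}
mar = {"u2", "u6"}
print(retention([jan, feb, mar]))  # → [4, 2, 1]
```

Dividing each size by the January count gives the retention rate per month. In SPL, one common pattern is stats values(UID) per month followed by month-over-month set comparisons, though the exact query depends on the data volume.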
I have to replace multiple text strings with different values. E.g., log statement:

"Hello, this is sample url for employees, /company/employee/1 and there is also a sample url for departments like /company/name/department/a1"

I have to replace any URLs of the format "/company/employee/*" with "/company/employee/{id}", URLs of the format "/company/name/department/*" with "/company/name/department/{deptId}", URLs of the format "/company/notfound/404" with "/company/notfound/{status}", and so on. So the output will look like below:

"Hello, this is sample url for employees, /company/employee/{id} and there is also a sample url for departments like /company/name/department/{deptId}"

Currently I am using nested replace statements (one per pattern, three in the example above) because I don't know how many patterns exist in the statement, so I check all of them. The nested replace seems slow and is also giving errors like:

has exceeded configured match_limit, consider raising the value in limits.conf.

My nested replace statements also keep growing as I add more URL formats. This is exactly how I am forming the regex:

| eval apiPath = replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(
    trimmedUrl,
    "\/[0-9]{1}[0-9a-z]+\/", "/{id}/"),
    "\/[a-z]{1,2}[0-9]{1}[0-9a-z]+\/", "/{id}/"),
    "\?.*", "?<my-filters>"),
    "\/P-[0-9]+", "/P-{id}"),
    "\/car\/[a-z]+\/.*", "/car/{carType}/{id}"),
    "\b[0-9a-f]{8}\b-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-\b[0-9a-f]{12}\b", "{uuid}"),
    "\/user\/[a-z0-9+\._-]+(@|%2540)[a-z\.-]+\.[a-z]+$", "/user/{email}"),
    "\/(jobs|xls)\/[a-z0-9]+", "/\1/{id}"),
    "\/org\/.*\/enroll\/status", "/org/{id}/enroll/status"),
    "\/car\/[a-zA-Z]+\/[0-9]+$", "/car/{carType}/{id}")

I want to remove the error and make the search execute faster. Please suggest a couple of different options. Thank you.
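One way to avoid deeply nested replace calls is to keep the (pattern, replacement) pairs in an ordered list and apply them in a loop, which also keeps each regex small and cheap to match. A Python sketch using three of the URL shapes from the post (the patterns here are illustrative, not a drop-in for the SPL above):

```python
import re

# Ordered (pattern, replacement) pairs; segments stop at "/" or whitespace
rules = [
    (re.compile(r"/company/employee/[^/\s]+"), "/company/employee/{id}"),
    (re.compile(r"/company/name/department/[^/\s]+"), "/company/name/department/{deptId}"),
    (re.compile(r"/company/notfound/\d+"), "/company/notfound/{status}"),
]

def normalize(text):
    for pattern, repl in rules:
        text = pattern.sub(repl, text)
    return text

msg = "see /company/employee/1 and /company/name/department/a1"
print(normalize(msg))
```

On the match_limit error: that usually points to backtracking-heavy patterns rather than the number of replaces. Anchoring patterns and replacing open-ended `.*` segments (as in `\/org\/.*\/enroll\/status`) with bounded classes like `[^/]+` tends to help, alongside splitting the nest into a sequence of separate eval/replace steps.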
I am trying to find the top API URLs consumed by our clients. The URIs in our logs are of the formats below:

1. https://api.server.com/user/1 (type 1)
2. https://api.server.com/user/2 (type 1)
3. https://api.server.com/user/3/role/1 (type 2)
4. https://api.server.com/user?name=test (type 3)

Now I need a result count for each URI like below:

/user/{id} - 2
/user/{id}/role/{id} - 1
/user - 1 (or /user/{filter} for query params, e.g. /user/{filter})

I have unlimited combinations of such path parameters for different resources. What is the best way to get the count of URL hits?
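A common approach is to normalize each path to its route template first, then count the templates. A Python sketch covering the three example shapes (numeric IDs, nested resources, query strings); the single `\d+`-segment rule is a simplification and real logs may need more rules:

```python
import re
from collections import Counter
from urllib.parse import urlparse

def route_template(url):
    """Collapse purely numeric path segments into {id}; drop the query string."""
    path = urlparse(url).path
    return re.sub(r"/\d+(?=/|$)", "/{id}", path)

urls = [
    "https://api.server.com/user/1",
    "https://api.server.com/user/2",
    "https://api.server.com/user/3/role/1",
    "https://api.server.com/user?name=test",
]
print(Counter(route_template(u) for u in urls))
```

In Splunk the same normalization is typically done with a chain of eval replace() calls or a sed-mode rex before a stats count by the normalized field.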