All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunk community,

For this dataset:

Time      Agent  Number of calls taken
11:00 AM  John   1
11:00 AM  Kate   0
11:00 AM  Eric   1
10:00 AM  John   2
10:00 AM  Kate   1
10:00 AM  Eric   0
9:00 AM   John   0
9:00 AM   Kate   1
9:00 AM   Eric   1
8:00 AM   John   3
8:00 AM   Kate   1
8:00 AM   Eric   2
7:00 AM   John   3
7:00 AM   Kate   5
7:00 AM   Eric   2
6:00 AM   John   2
6:00 AM   Kate   3
6:00 AM   Eric   0

Is it possible to get a moving average for each agent, along with the moving average for the total number of calls in one specific hour, and to place this all into a time chart?

This is the Splunk query I'm currently using:

| union
    [| search <insert index here> AGENT=* | bin _time span=1h | stats count BY _time
     | trendline wma2(count) AS AverageNumberoftotalcallsperhour
     | table _time AverageNumberoftotalcallsperhour ]
    [| search <insert index here> Agent=Kate | bin _time span=1h | stats count BY _time
     | trendline wma2(count) AS AvgKate
     | table _time AvgKate ]
    [| search <insert index here> Agent=John | bin _time span=1h | stats count BY _time
     | trendline wma2(count) AS AverageNumberOfCallsPerHourbyJohn
     | table _time AverageNumberOfCallsPerHourbyJohn ]
    [| search <insert index here> Agent=Eric | bin _time span=1h | stats count BY _time
     | trendline wma2(count) AS AvgEric
     | table _time AvgEric ]

However, when I run the query, the output isn't correct:

_time      AverageNumberoftotalcallsperhour   AvgKate   AverageNumberOfCallsPerHourbyJohn   AvgEric
6:00 AM    2
7:00 PM    2
8:00 AM                                       3
9:00 AM                                       3
10:00 AM                                                 4
11:00 AM                                                 4
Noon                                                                                        5
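For what it's worth, union appends each subsearch's rows rather than joining them on _time, which is why each average lands in its own block of rows in the output above. A hedged sketch of a single-pipeline alternative, assuming the field is consistently named Agent (the first subsearch uses AGENT=*, which may be a typo) and the agents are John, Kate, and Eric as in the sample, so timechart's split produces columns with those names:

<insert index here> Agent=*
| timechart span=1h count BY Agent
| addtotals fieldname=Total
| trendline wma2(John) AS AvgJohn wma2(Kate) AS AvgKate wma2(Eric) AS AvgEric wma2(Total) AS AvgTotal
| table _time Avg*

Because everything stays in one result set keyed by _time, each row carries all four moving averages and charts cleanly as a single timechart.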
Hi, we are in the process of migrating all apps/configs from an older standalone instance (7.2.4.2) to a newer SHC (8.1.1). A data model was also migrated along with the app and appears to be working fine in terms of acceleration statistics. But when I try to access it using tstats, the format that worked previously returns nothing:

| tstats summariesonly=t count FROM datamodel="modelname.dataset" by dataset.field

But if I do not mention the dataset in the FROM clause, it works just fine:

| tstats summariesonly=t count FROM datamodel="modelname" by dataset.field

Could I have missed something during the migration? What could be causing the first command to not work?
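One thing worth trying while you dig: on newer versions the dataset is usually selected with a WHERE nodename constraint rather than the datamodel="model.dataset" form. A hedged sketch of the equivalent search:

| tstats summariesonly=t count FROM datamodel=modelname WHERE nodename=dataset BY dataset.field

If that returns the expected counts, it would suggest the older FROM spelling, rather than the migration itself, is the problem.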
Hello, I'm trying to set up Splunk in a lab environment. I've got one Windows client which I want to send logs over to my Splunk server via a UF. I am managing the endpoint's Splunk config via a deployment server. This works fine: the client checks in, my apps get pushed to it, all fine. For Windows logs, I'm using the Splunk TA for Windows (https://splunkbase.splunk.com/app/742/#/overview) with an inputs.conf as below:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
renderXml = true
evt_resolve_ad_obj = 1
index = windows

[WinEventLog://System]
disabled = 0
renderXml = true
evt_resolve_ad_obj = 1
index = windows

[WinEventLog://Application]
disabled = 0
renderXml = true
evt_resolve_ad_obj = 1
index = windows

The app gets deployed correctly and I see the above inputs.conf in %SPLUNK_HOME%/apps/Splunk_TA_windows/local/inputs.conf. However, in Splunk, I don't seem to be getting all the logs. In fact, I'm only getting event ID 6xxx logs, and very few of them (43 events/15 mins).

I can't figure out why all the logs aren't coming in but only a few irrelevant ones. Any help will be much appreciated. Thank you!
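Two hedged checks, since event IDs in the 6xxx range are mostly service start/stop noise from the System log: first, confirm the UF's service account can actually read the Security log (running as Local System, or as a member of the Event Log Readers group, is the usual requirement; an account without that right silently gets nothing from Security). Second, verify which inputs settings win after deployment and whether the forwarder logs any WinEventLog errors:

REM on the Windows client, from an elevated prompt (paths assume a default UF install)
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list WinEventLog://Security --debug

REM look for WinEventLog errors in the forwarder's own log
findstr /i "WinEventLog" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"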
Suppose there is a panel that displays a database symbol. Below that symbol I want to display two values, each in its own single-value tag; the values are dynamic:

<panel><single></single><single></single></panel>

I used styling on the single tag values, but they display one above the other; I need them side by side.
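A sketch of the usual CSS workaround in Simple XML, assuming the panel can carry an id (the id db_panel and the .single selector here are illustrative assumptions; check your browser's inspector for the exact class names your Splunk version renders):

<panel id="db_panel">
  <html depends="$alwaysHide$">
    <style>
      /* assumption: lay the two single-value elements out side by side */
      #db_panel .dashboard-element.single {
        display: inline-block !important;
        width: 49% !important;
      }
    </style>
  </html>
  <single></single>
  <single></single>
</panel>

The depends="$alwaysHide$" trick just keeps the style-only html element from reserving visible space, since that token is never set.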
Hi, I get exactly the same count for avg and peak. Is there an issue with my query?

index=a sourcetype=ab earliest=-30d latest=now
| bucket _time span=1mon
| stats count by _time
| eval date_month=strftime(_time, "%b")
| eval date_day=strftime(_time, "%a")
| stats avg(count) as AverageCountPerDay max(count) AS Peak_Per_Month by date_month, date_day

date_month  date_day  AverageCountPerDay  Peak_Per_Month
Aug         Sun       82037650            82037650
Jul         Thu       4621995             4621995
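The likely cause: bucket _time span=1mon collapses everything into one row per month, so each (date_month, date_day) group in the final stats contains exactly one value, and avg and max of a single value are always equal. A sketch of one fix, assuming the intent is the average and peak of daily counts within each month (bucket by day instead, and drop date_day from the by-clause so each group holds a month's worth of days):

index=a sourcetype=ab earliest=-30d latest=now
| bucket _time span=1d
| stats count by _time
| eval date_month=strftime(_time, "%b")
| stats avg(count) as AverageCountPerDay max(count) AS Peak_Per_Month by date_month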
I have no idea what I need to do here (if anything), and the guy who has dealt with getting data in previously is on holiday for a while, so any advice is much appreciated.

We upgraded our Palo Alto firewall to a newer version which has moved the VPN logs from the system category to a separate one for GlobalProtect. When I noticed we weren't receiving the VPN logs anymore, we got the firewall guys to forward the new log category to us, and our Splunk guy assured me that we wouldn't need to do anything else.

However, the logs are supposedly being forwarded to us now, but Splunk isn't showing them, at least not in the index we have for the Palo Alto logs. Is our Splunk guy wrong, and do we actually have to manually set up the new sourcetype? Or have the firewall guys messed up (harder to check due to language barriers and time differences)?

I am pretty clueless about this, so apologies if this is a silly question.
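One way to tell the two failure modes apart from the Splunk side, sketched with deliberately broad terms (the Palo Alto add-on uses sourcetypes in the pan:* family, but the new logs may be landing somewhere unexpected):

index=* (globalprotect OR GLOBALPROTECT) earliest=-4h
| stats count by index sourcetype host

If this returns nothing over a period when VPN logins definitely happened, the events are probably not reaching Splunk at all (a firewall-side forwarding issue); if they show up under a different index or sourcetype, then the inputs/props on your side need updating for the new log category.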
Hey all, I am in the process of migrating from a Windows heavy forwarder to a Linux heavy forwarder for Splunk Cloud. Part of this exercise involves migrating the Splunk DB Connect app from the Windows HF box to the new Red Hat 8.4 HF box.

Quick details:
- Splunk DB Connect 3.6.0
- DB Connect app host: Red Hat Linux 8.4 GA, OpenJDK (Corretto) 11.0.2 LTS
- MS SQL Server: Windows Server 2016 Standard (1607), MS SQL 2014 (12.0.4522.0)
- Registry is modified to allow TLS 1.1 and 1.2 under SChannel (via the IISCrypto tool)
- Registry is also modified to allow strong SChannel .NET configuration. This was mentioned as necessary for SQL Server 2019, but I figured it might apply to 2014 as well (https://learn.mediasite.com/course/enabling-tls-1-2/lessons/sql-server-configuration/)

I basically duplicated the configuration from the original Windows server that ran the DB Connect app. I brought over the same connection information as well as the same identity information, and I've validated that the identity information is correct. I am getting the following error:

Database connection server.domain.com is invalid. The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints".

This seems to imply some sort of certificate negotiation error. I have browsed through the DB Connect documentation but nothing in there seems to help. I noticed a few different keystores around the DB Connect app and tried messing with a few of them, but without luck. Currently in the "keystore" folder I have loaded a domain-PKI-issued cert/key pair and the domain PKI CA chain. None of those seem to make any difference.

My basic connection string looks like the following in the edit URL box:

jdbc:sqlserver://server.domain.com:1433;databaseName=Splunk;selectMethod=cursor;encrypt=true

I've tried various variations of this as well, like:

jdbc:sqlserver://server.domain.com:1433;databaseName=Splunk;selectMethod=cursor;encrypt=true;trustStore=/opt/splunk/etc/apps/splunk_app_db_connect/keystore/default.jks;trustStorePassword=password

I wasn't sure how to configure the JRE installation path, and I also wasn't positive where it was located on the Red Hat 8.4 instance. I did some tracking and I think it's loaded here:

/usr/lib/jvm/java-11-openjdk-11.0.12.0.7-0.el8_4.x86_64/

I mainly did that because it appears JAVA_HOME wasn't set in the OS. I could have set it, but I figured it would rule out any potential issues if I just pointed directly at the folder. I haven't had much luck.

I loaded up Wireshark and confirmed I could see the connection: I do see the inbound 1433 connection from the heavy forwarder and the active connection being made, but because it's over 1433, Wireshark isn't showing any TLS negotiation. I am not sure if that's an issue or not.

I am not sure where else to go from here. Does anyone have any thoughts?
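"Certificates do not conform to algorithm constraints" usually comes from the JDK rather than from DB Connect: OpenJDK 11's java.security file ships a jdk.certpath.disabledAlgorithms setting that rejects certificates signed with SHA-1 (or using short RSA keys), which is common on older SQL Server 2014 certificates, and the previous Windows HF may have run an older JRE without that restriction. Two hedged diagnostics:

# test 1: ask the driver to skip certificate validation (diagnostic only; the session
# stays encrypted but the server certificate is no longer checked)
jdbc:sqlserver://server.domain.com:1433;databaseName=Splunk;selectMethod=cursor;encrypt=true;trustServerCertificate=true

# test 2: if test 1 connects, the JDK constraint is the blocker; the relevant knob is
# jdk.certpath.disabledAlgorithms in
# /usr/lib/jvm/java-11-openjdk-11.0.12.0.7-0.el8_4.x86_64/conf/security/java.security
# (relaxing it is a workaround; re-issuing the SQL Server certificate signed with
# SHA-256 is the cleaner long-term fix)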
I started exploring the new Studio and created some dashboards. Cool, but new, so there isn't much community wisdom and there are few examples, only the documentation. Is there a dedicated forum for the Studio? The label "studio" is not available here.

My question: is there a way to get a custom color in a table in the new Studio, similar to this custom color code in Simple XML?

<format type="color" field="appid_tag">
  <colorPalette type="expression">case(value="Banana","#a740a2")</colorPalette>
</format>

Next question up: can I use conditional color in Studio, similar to this one in Simple XML?

<format type="color" field="tags">
  <colorPalette type="expression">case(match(lower(value), "`"),"#b6fcd5")</colorPalette>
</format>
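For the exact-match case, Studio expresses table cell color in the dashboard's JSON source rather than XML. A sketch, assuming a table visualization with a column named appid_tag; the seriesByName and matchValue selectors come from the Studio docs, but the exact shape may vary between Studio versions:

"options": {
  "columnFormat": {
    "appid_tag": {
      "data": "> table | seriesByName(\"appid_tag\") | matchValue(appidColorConfig)"
    }
  }
},
"context": {
  "appidColorConfig": [
    { "match": "Banana", "value": "#a740a2" }
  ]
}

For the conditional case: matchValue only handles exact values, and rangeValue covers numeric thresholds, so a regex condition like match(lower(value), ...) doesn't appear to have a direct Studio equivalent yet.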
Hi, a newbie to Splunk here. I have found the query for login info for users on a host:

index=os source=var/log/secure host=myhost process=sshd

I want to trigger an alert if a user who has logged in before logs in to the host again after more than 90 days, i.e. the user should not have logged in to the host for more than 90 days. Could someone please help me write this query?

Thank you
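A sketch of one way to do this, assuming a field named user is already extracted and the alert searches a window longer than 90 days (180 days here) so the previous login is visible: order each user's logins, carry the previous login time forward, and fire on any gap over 90 days.

index=os source=var/log/secure host=myhost process=sshd earliest=-180d
| sort 0 user _time
| streamstats current=f window=1 last(_time) AS prev_login BY user
| eval gap_days = round((_time - prev_login) / 86400)
| where gap_days > 90
| convert ctime(prev_login)
| table user _time prev_login gap_days

For large environments, persisting each user's last-seen time in a lookup and comparing against it on each run scales better than re-searching 180 days of events every time.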
Hi, I have obtained a table like this:

code  status               count
1     27 aug 2021 success  45
1     27 aug 2021 failure  0

I want a format like this:

code  27 aug 2021 success  27 aug 2021 failure
1     45                   0
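This is a pivot of the status values into columns, which xyseries does directly. A sketch, assuming the three fields are literally named code, status, and count:

... | xyseries code status count

(| chart sum(count) over code by status is an equivalent spelling.)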
I will have a POC with a customer. Is it possible to have a single-site / multi-site cluster on a Splunk Enterprise trial license?
Dear AppD Team, I have a few services written in .NET Core running on Linux containers. I'm using the latest .NET agent for Linux. Among these services are a Kafka producer and a Kafka consumer. We are finding that AppD does not see the Kafka traffic between these services. Do the .NET agents currently support Kafka? Thanks.
Hi y’all. I recently installed a Splunk Enterprise AMI instance in EC2. Unfortunately, I am unable to log in with the default credentials; I am getting 'Server Error'. I am using the latest version, so I am assuming the credentials are:

Username: admin
Password: SPLUNK-<instance-id>

I tried with just the instance ID too, but I always see 'Server Error'. I also tried terminating the instance and launching a new one; that didn't work. I also tried SSHing into the EC2 instance to change the password, but I don't have access as the ec2 user. Any help with this is hugely appreciated. Thanks in advance.
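If you can regain shell access to the instance (for example by attaching your key pair and logging in as the AMI's default SSH user), the documented local reset is a user-seed.conf; a sketch, assuming Splunk is installed in /opt/splunk:

# stop Splunk, move the old password store aside, and seed a fresh admin password
sudo /opt/splunk/bin/splunk stop
sudo mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak
sudo tee /opt/splunk/etc/system/local/user-seed.conf > /dev/null <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = <your-new-password>
EOF
sudo /opt/splunk/bin/splunk start

Also worth noting: a 'Server Error' page (as opposed to a failed-login message) often means splunkd itself is unhealthy, so var/log/splunk/splunkd.log on the instance may show the real problem.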
I have data something like this.

Input:

firstname=value1,lastname=value2,email=value3,address=value4 ... etc.
firstname=value11,lastname=value12,email=value13,address=value14 ... etc.
firstname=value12,lastname=value13,email=value14,address=value15 ... etc.

Output:

firstname  lastname  email    address
value1     value2    value3   value4
value11    value12   value13  value14
value12    value13   value14  value15

I want to extract this data into a table with the keys as column headers. Please note these keys are dynamic and can have any names. I tried

search | extract pairdelim="," kvdelim="="

but I'm not sure how to put the results into a table format. Any inputs?
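extract already creates one field per key, so the remaining step is to hide the built-in fields and display everything else. A sketch; the exact list of default fields to drop may need tuning for your data:

search | extract pairdelim="," kvdelim="="
| fields - _raw _time host source sourcetype index linecount punct splunk_server timestamp date_*
| table *

Because table * picks up whatever fields survive, new keys appear as new columns without the query changing.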
Our deployer instance is getting the following error:

Snapshots are supposed to be created every 60 seconds, but at least 94142 seconds have passed since last successful snapshot creation.

And the time since the last successful snapshot creation keeps increasing.
I would like to use indexRouting to move some log lines to a given index and have other log lines go to the HEC's default index. The log lines that I want to route are single-line JSON formatted as a HEC event. Below is a pretty-printed example:

{
  "event": {
    "device": {
      "id": "dcef6f000bc7a6baffc0f0b5f000"
    },
    "logMessage": {
      "description": "Publishing to web socket",
      "domain": "WebSocketChannel",
      "severity": "debug"
    },
    "topic": "com.juneoven.dev.analytics"
  },
  "index": "analytics_logs_dev",
  "level": "INFO",
  "source": "dev.analytics",
  "sourcetype": "analytics-logs",
  "time": 1630091106.076237
}

Other log lines are normal text logs (non-JSON formatted):

2021-08-27 19:09:14,295 INFO [tornado.access] 202 POST /1/analytics/log (10.110.4.224) 35.62ms

I see that there is a customFilter feature. I am hoping that I can key off of the 'index' field in the HEC event to route these JSON log lines to their index and allow all other lines to go to the default index for the HEC. Is that possible? Is there some documentation that would help me? Thanks.
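Two hedged observations. First, if these payloads are sent to HEC's /services/collector/event endpoint as-is, the top-level index key is honored by HEC itself as long as analytics_logs_dev is in the token's allowed-indexes list, so no extra routing may be needed. Second, if they instead arrive as raw text and must be routed at parse time on a heavy forwarder or indexer, the standard props/transforms index-routing pair applies; a sketch, assuming the data comes in under a sourcetype you control:

# props.conf (hypothetical sourcetype name)
[analytics-logs]
TRANSFORMS-route_analytics = route_to_analytics_index

# transforms.conf
[route_to_analytics_index]
REGEX = "index"\s*:\s*"analytics_logs_dev"
DEST_KEY = _MetaData:Index
FORMAT = analytics_logs_dev

Events that don't match the regex fall through to the input's default index, which matches the behavior you describe.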
After upgrading Splunk Enterprise to version 8.2.2 from 8.0.x, Splunk will not start on my indexer/search head. When I start it I get the following error:

[error screenshot not reproduced]

Any ideas on what could be causing this or places to check?

Thanks!
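Without the error text it's hard to be specific, but two generic first stops after a failed upgrade, assuming a default /opt/splunk install:

# surface configuration problems the upgrade may have introduced
/opt/splunk/bin/splunk btool check --debug

# read the last lines splunkd wrote before giving up
tail -200 /opt/splunk/var/log/splunk/splunkd.log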
Hi, I am trying to find the min, max, and avg for the 99th, 90th, and 75th percentiles with the below:

index="main" source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U"
| eval responseTime=round(time_taken/1000)
| timechart span=1mon perc99(time_taken) as 99thPercentile perc90(time_taken) as 90thPercentile perc75(time_taken) as 75thPercentile
| stats min(99thPercentile) max(99thPercentile) avg(99thPercentile) min(90thPercentile) max(90thPercentile) avg(90thPercentile) min(75thPercentile) max(75thPercentile) avg(75thPercentile) by _time

min(99thPercentile)  max(99thPercentile)  avg(99thPercentile)  min(90thPercentile)  max(90thPercentile)  avg(90thPercentile)  min(75thPercentile)  max(75thPercentile)  avg(75thPercentile)
66.50                66.50                66.50                12.5                 12.5                 12.5                 5.984375             5.984375             5.984375

However, all the numbers are coming back the same. Any ideas?

Thanks

Joe
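The cause: timechart span=1mon yields one row per month, and the trailing stats ... by _time then aggregates each row by itself, so min, max, and avg are each computed over a single value (note also that responseTime is computed but never used; the percentiles run on time_taken). A sketch of one fix, assuming you want daily percentiles summarized across the whole range:

index="main" source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U"
| timechart span=1d perc99(time_taken) as p99 perc90(time_taken) as p90 perc75(time_taken) as p75
| stats min(p99) max(p99) avg(p99) min(p90) max(p90) avg(p90) min(p75) max(p75) avg(p75)

Dropping the by _time clause lets the final stats aggregate across all the daily rows at once, so min, max, and avg can differ.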
I have two logfiles, logfile1.log and logfile2.log, and I have created field extractions for both of them. Here is an example line from each log:

logfile1.log: file1time, epoch, file1ID, name, flag, stat1, stat2, stat3
logfile2.log: lastruntime, file2ID, epoch

What I need to do is compare the IDs between the two logfiles, ensure that they're the same, and output the "name" field in the search that has logfile2.log as its source. There's probably a very easy way to do this, but I can't think of it. Any help would be greatly appreciated. Thanks!
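A sketch of a join-free pattern, assuming the extracted ID fields are literally named file1ID and file2ID: search both sources at once, merge the two ID fields into one, then keep only IDs present in both files.

(source="logfile1.log") OR (source="logfile2.log")
| eval ID=coalesce(file1ID, file2ID)
| stats values(name) AS name dc(source) AS sources values(lastruntime) AS lastruntime BY ID
| where sources = 2

The dc(source)=2 filter is what enforces "the ID appears in both logfiles", and name is carried over from the logfile1.log events in each group.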
What OOTB (fresh install) features of Splunk Enterprise or ES should be kept, turned on, or turned off, in your expert opinion, to get the best out of Splunk Core / ES? Thank you in advance.