Hi, I have a huge set of data with different email addresses in it, and I want to set up email alerts for a few parameters. The issue is that I'm unable to group the events by email address and send one email alert per address with a CSV attachment of the results. Example: abc@email has around 80 events in the table, and I want to send only one alert to abc with all 80 events attached as a CSV. There are 85+ email addresses in my data, and they have to be grouped using only one SPL search that can be used in an alert. Note: please don't suggest $result.field$ or stats to group; those are not useful for my case. Thank you.
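One possible approach, as a sketch rather than a tested answer: run one sendemail per address by driving a subsearch with the map command. The index name mail_data and the field name email below are placeholders for the actual data:

  index=mail_data
  | dedup email
  | fields email
  | map maxsearches=100 search="search index=mail_data email=$email$ | sendemail to=\"$email$\" subject=\"Alert for $email$\" sendcsv=true server=localhost"

The caveat is cost: map launches one search per input row, so 85+ addresses means 85+ subsearches per alert run, and maxsearches must be raised above its default of 10.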
We are using Splunk forwarder v9.0.3. One of the X509 validations we would like to perform against the TLS server certificate presented by the Splunk indexer is ExtendedKeyUsage (EKU) validation for Server Authentication. To test this use case, we generated a TLS server certificate without the ExtendedKeyUsage extension. However, the Splunk forwarder still accepts the certificate. Ideally, it should accept the certificate only when ExtendedKeyUsage is set to Server Authentication. Is this a known limitation, or does it require a configuration change to enable this EKU validation? Please advise. Below are our outputs.conf contents.

[tcpout-server://host:port]
clientCert = /<..>/clientCert.pem
sslPassword = <..>
sslRootCAPath = /<..>/ca.pem
sslVerifyServerCert = true
sslVerifyServerName = true
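For anyone reproducing this, one way to confirm that the test certificate really omits the extension is to inspect it with OpenSSL (assuming OpenSSL 1.1.1 or later; serverCert.pem is a placeholder path):

  openssl x509 -in serverCert.pem -noout -ext extendedKeyUsage

If the extension is present, this prints its contents (e.g. TLS Web Server Authentication); if it is absent, nothing is printed for it. This only verifies the certificate itself; whether the forwarder can be made to enforce EKU is the open question above.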
I have set up a cluster master, an indexer cluster, and a search head cluster. I have a new environment for the monitoring console. When I go to Settings > Monitoring Console > Settings > General Setup and switch to Distributed mode, the servers are not showing up under remote instances. Can someone help me with this?
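One common cause, assuming an otherwise standard setup: the monitoring console only discovers instances that are visible to it through distributed search. For clustered indexers, the MC host is typically joined to the indexer cluster as a search head via the cluster master, while non-clustered instances (search heads, deployer, license master) are added as search peers. A sketch of adding a peer from the MC host's CLI (hostname and credentials are placeholders):

  splunk add search-server https://sh1.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword peerpassword

The same can be done via Settings > Distributed search > Search peers in the UI; after that, revisiting General Setup should list the instances under remote instances.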
Hi, how do I display the day / month / time / year in a format like the one below using a simple format string? Ex: | makeresults
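A minimal sketch with makeresults and strftime; the exact format string is an assumption, since the target layout isn't fully shown, so adjust the %-codes as needed:

  | makeresults
  | eval formatted_time=strftime(_time, "%d/%m/%Y %H:%M:%S")
  | table formatted_time

strftime converts the epoch value in _time to a display string: %d is the day, %m the month, %Y the four-digit year, and %H:%M:%S the time.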
I'm working with a query where I'm using a lookup to enrich events based on the work_queue field and then filtering to pass forward only those events with a matching entry in the lookup file. Here's a simplified version of my query:

index="acn_ticket_summary"
| lookup Master.csv "AssignmentGroup" as work_queue outputnew Desk_Ind, cdl_gs, Support_Team
| where isnotnull(work_queue)

This filters the events, keeping only those that have a non-null work_queue after the lookup. Requirement: I also need to capture the events that don't match (i.e., those that result in isnull(work_queue)) for separate calculations. Is there a way to modify my query to keep both the matched and unmatched events? Thank you in advance for your help!
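One sketch, assuming Desk_Ind is only ever populated by the lookup: instead of filtering, tag each event with a matched flag and branch on it later in the same search.

  index="acn_ticket_summary"
  | lookup Master.csv "AssignmentGroup" as work_queue outputnew Desk_Ind, cdl_gs, Support_Team
  | eval matched=if(isnull(Desk_Ind), "unmatched", "matched")

Both populations stay in the pipeline; downstream you can compute per-group results with something like | stats count by matched, or split with | where matched="unmatched". Note that testing work_queue itself won't distinguish the two cases, since it is the lookup's input field and is non-null either way; test one of the output fields instead.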
Hello there, I would like to pass two different values as a token. The search uses a code token, where the code field can hold a single value or multiple values. We need to calculate the length, and if the length is equal to 1 we need to pass value_1; if the length is greater than 1, we need to pass value_2 in a new token.

index=03_f123456 sourcetype=logs* (CODE IN ($code$))
| eval x=len($code$)
| eval y=if(x=1,"value_1","value_2")
| dedup y
| table y

Thanks in advance!
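A sketch of the counting step, assuming the token arrives as comma-separated values (as from a multiselect input): len() measures string length in characters, so counting values is usually done with split() and mvcount() instead:

  | makeresults
  | eval code="$code$"
  | eval n=mvcount(split(code, ","))
  | eval y=if(n=1, "value_1", "value_2")
  | table y

To surface y as a new token in a dashboard, a done handler on the search can capture it, e.g. <done><set token="new_token">$result.y$</set></done>.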
I would like the font size in the table I have made to be much bigger. Currently the largest size you can select in the font-size dropdown under colour and style is Large. How can I make the numbers in my table bigger?
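If this is a Simple XML dashboard, one common workaround (a sketch; the panel id my_table and the 20px size are assumptions) is to inject CSS from a hidden HTML panel:

  <row depends="$alwaysHideCSS$">
    <panel>
      <html>
        <style>
          #my_table table td, #my_table table th { font-size: 20px !important; }
        </style>
      </html>
    </panel>
  </row>

Give the target table element id="my_table" so the selector only affects that table; dropping the #my_table prefix would resize every table on the dashboard.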
Hi, we currently have a centralized WEF collection server that collects all Windows logs across the environment. This includes forwarding the Sysmon, Application, System channels, etc. to the collector. Everything ends up in ForwardedEvents on the WEF collection server. I've installed a UF on this host. I have the Windows TA deployed with the following input stanza:

#[WinEventLog://ForwardedEvents]
#disabled = 0
#index = wef
#start_from = oldest
#current_only = 0
#batch_size = 50
#checkpointInterval = 15
#renderXml=true
#host=WinEventLogForwardHost

I currently have two problems.

1. The Splunk universal forwarder doesn't appear to be keeping up with the number of Windows event logs coming to the WEF collector (~1000 hosts). Another (different) SIEM collector for WEF keeps up fine on the same host and collects all logs, so I'm able to compare what that collector is collecting vs the Splunk UF. I've tried adjusting batch_size and checkpointInterval as above.

2. I want to split certain Windows channels in the ForwardedEvents channel out to different indexes. I have tried deploying the Microsoft Sysmon TA and adding a new input with the following configuration:

#[WinEventLog://ForwardedEvents]
#disabled = true
#index = wef-sysmon
#start_from = oldest
#current_only = 0
#batch_size = 50
#checkpointInterval = 15
#renderXml=true
#host=WinEventLogForwardHost
#whitelist = $XmlRegex='Microsoft-Windows-Sysmon'

I then add blacklist = $XmlRegex='Microsoft-Windows-Sysmon' to the Windows TA. Then everything seems to stop: I stop receiving all events on my indexer. I've also tried adding multiple inputs with differing indexes and whitelists/blacklists in the Windows TA, to no avail. Would someone be able to point me in the right direction?
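On the second problem: since both TAs define a stanza with the same name ([WinEventLog://ForwardedEvents]), the two configurations merge into one input rather than running side by side, which may explain why everything stops. One alternative sketch (assuming the events arrive with source XmlWinEventLog:ForwardedEvents because renderXml=true; the index name is a placeholder, and the source name is worth verifying against your data) is to keep a single input on the UF and route Sysmon events to a different index with props/transforms on the indexers or a heavy forwarder:

  # props.conf
  [source::XmlWinEventLog:ForwardedEvents]
  TRANSFORMS-route_sysmon = route_sysmon_to_index

  # transforms.conf
  [route_sysmon_to_index]
  REGEX = Microsoft-Windows-Sysmon
  DEST_KEY = _MetaData:Index
  FORMAT = wef-sysmon

Index-time routing via _MetaData:Index is standard Splunk behavior; the target index (wef-sysmon here) must already exist on the indexers.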
I work in the Healthcare industry, and our customer base can have product versions ranging from 6 to 18. For this dashboard, sites with versions below 15 use one data source, while sites with versions 15 and over use a different set of data sources. So I have one query for versions below 15 and another query for versions 15 and above. I have built a dropdown that lists the site names as choices, and there is also a time picker to choose date ranges. In order to run the correct query, I need to somehow pass the product version so the dashboard knows which one to run and display. How do I expose the product version as a token to decide which query to use? Here is the start of my dashboard code; below it are just the two queries I will be choosing from.

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="propertyId" searchWhenChanged="false">
    <label>Site</label>
    <fieldForLabel>FullHospitalName</fieldForLabel>
    <fieldForValue>propertyId</fieldForValue>
    <search>
      <query>| inputlookup HealthcareMasterList.csv
| search ITV=1 AND ITV_INSTALLED&gt;1
| table propertyId FullHospitalName MarinaVersion
| join type=left propertyId
    [ search sourcetype=sysconfighost-v* [| inputlookup HealthcareMasterList.csv | search ITV=1 AND ITV_INSTALLED&gt;1 | fields propertyId | format]
    | dedup propertyId hostId sortby -dateTime
    | stats max(coreVersion) as coreVersion by propertyId]
| eval version=if(isnull(coreVersion),MarinaVersion,coreVersion)
| eval version=substr(version,1,2)
| fields - MarinaVersion coreVersion
| sort FullHospitalName</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="time" token="field1" searchWhenChanged="false">
    <label>Date Picker</label>
    <default>
      <earliest>-1mon@mon</earliest>
      <latest>@mon</latest>
    </default>
  </input>
</fieldset>

With the query above I end up with three fields: propertyId, FullHospitalName, and version. FullHospitalName is what is displayed in the dropdown, and propertyId is what needs to be passed to the query itself so it knows what data to collect. How do I use the version field to determine which of the two queries to use?
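One sketch, assuming Simple XML: a dropdown's <change> handler can read other columns of the selected row via $row.<fieldname>$, so you can derive show/hide tokens from version and let each panel depend on one of them. The token names showLow and showHigh are placeholders:

  <change>
    <condition match="$row.version$ &lt; 15">
      <set token="showLow">true</set>
      <unset token="showHigh"></unset>
    </condition>
    <condition>
      <set token="showHigh">true</set>
      <unset token="showLow"></unset>
    </condition>
  </change>

This block goes inside the dropdown input. Then give the below-15 panel depends="$showLow$" and the 15-and-above panel depends="$showHigh$"; only the matching query runs and renders.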
How can I make it show me only the events where the Call.CallForwardInfo.OriginalCalledAddr field appears as null? Right now I have this result; can you help me?
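A sketch, assuming the field is genuinely absent on the events you want (the index and sourcetype names are placeholders):

  index=your_index sourcetype=your_sourcetype
  | where isnull('Call.CallForwardInfo.OriginalCalledAddr')

Field names containing dots need single quotes inside eval/where expressions. If the field instead holds a literal placeholder string such as "null", compare against that string: | where 'Call.CallForwardInfo.OriginalCalledAddr'="null".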
Does anyone have a sample AWS EC2 instance dashboard? I am also looking for an EC2 instance OS/EBS/networking error code list to build the dashboard and queries. Thanks, Muhammad
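Not a full dashboard, but a sketch of the kind of panel query such dashboards usually start from, assuming the Splunk Add-on for AWS is ingesting CloudWatch metrics with sourcetype aws:cloudwatch (field names vary by add-on version, and the per-instance split assumes an InstanceId field extracted from metric_dimensions):

  index=aws sourcetype="aws:cloudwatch" metric_name=CPUUtilization
  | timechart avg(Average) by InstanceId

Similar panels for EBS and networking would swap in metric names like VolumeReadOps or NetworkIn.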
First of all, English isn't my native language, so I apologize in advance for any errors in this support topic. I've run into a problem I'm a bit lost with: I'm indexing a lot of different data with different sourcetypes (mostly CSV and JSON data, but with a bit of unstructured data here and there), and the eventcount and tstats commands return very different event counts. I know the eventcount command ignores the time window, so I tried extending the time window into the future up to the maximum Splunk supports, but to no avail. To put numbers on it: in my instance, the command "eventcount index=XXX*" returns about 160 million events in my indexes, while "| tstats count where index=XXX* by sourcetype" finds only about 59 million. Even extending the time window with "latest=+4824d" to reach the software's maximum doesn't yield more events. I thought about frozen data, so I increased the time window before events freeze just for debugging, deleted all my data, and reindexed everything, but to no avail. Is it possible for an event to be indexed without a sourcetype? Or is there technological wizardry I'm not aware of?
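One way to narrow this down (a sketch; both commands are standard SPL): compare per-index totals from both sides over an explicit all-time window, since tstats honors the search time range while eventcount does not:

  | tstats count where index=XXX* earliest=1 latest=+4824d by index

versus

  | eventcount summarize=false index=XXX*
  | stats sum(count) as count by index

Comparing per-index rather than per-sourcetype totals helps separate "events missing a sourcetype" from "events outside the searched time range", which are the two usual suspects when these numbers diverge.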
I signed up for the Splunk Cloud Platform free trial as part of an online class. However, I'm unable to access my instance. I can see that an instance has been created, but nothing happens when I click the "Access instance" button. I also got an email with a temporary password for the instance, but the login failed, and I got locked out after several attempts. Does anyone know how to resolve this? Update: I was able to log in after resetting the password and waiting for the lockout to expire, but the "Access instance" button is still unresponsive.
Hello, I am trying to join two indexes to display data from our local printers. I have an index getting data from our printer server that contains the following data:

index=prntserver
_time                  prnt_name  username   location
2024-11-04 11:05:32    Printer1   jon.doe    Office
2024-11-04 12:20:56    Printer2   tim.allen  FrontDesk

I have an index getting data from our DLP software that contains the following data:

index=printlogs
_time                  usersname  directory            file
2024-11-04 11:05:33    jon.doe    c:/desktop/prints/   document1.doc
2024-11-04 12:20:58    tim.allen  c:/documents/files/  document2.xlsx

I am trying to join the two indexes to give me the time, printer name, user name, and location from the print server index, plus the directory and file name recorded in the print log index. I want to use time to join the two indexes, but my issue is that the timestamps are off by 1 if not 2 seconds between the two index records. I was trying to use the transaction command with maxspan=3s to be safe but cannot get it to work. Here is what I have been trying to work with:

index=printserver
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS servtime
| join type=inner _time
    [ search index=printlogs
      | convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS logtime ]
| transaction startswith=eval(src="<servtime>") endswith=eval(src="<logtime>") maxspan=3s
| table servtime prnt_name username location directory file

Thanks for any assistance given on this one.
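A sketch of an alternative that avoids join/transaction entirely: search both indexes at once, normalize the user field, and correlate on user within a coarse time bucket (the 3s span mirrors your maxspan; field names are taken from the samples above):

  (index=prntserver) OR (index=printlogs)
  | eval user=coalesce(username, usersname)
  | bin _time span=3s
  | stats values(prnt_name) as prnt_name values(location) as location
          values(directory) as directory values(file) as file by _time user
  | where isnotnull(prnt_name) AND isnotnull(file)

One caveat: bin uses fixed buckets, so a pair straddling a bucket boundary (e.g. 11:05:32 and 11:05:33 across a 3s edge) can still split; widening the span or streamstats-style pairing are common workarounds.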
I have data similar to:

Field-A  Field-B
A1       B1
A1       B2
A1       B3
A2       B4
A3       B5
A2       B6

Field-A repeats, but Field-B holds unique values. I am using | stats count by Field-A to get the number of occurrences of A1, A2, A3, and am trying to include a single example of Field-B. Something like:

Field -- Count -- Example
A1 -- 3 -- B2
A2 -- 2 -- B6
A3 -- 1 -- B5

Thank you for any suggestions.
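A sketch using the first() aggregate alongside count; field names with hyphens may need quoting in SPL, so FieldA and FieldB below stand in for the actual names:

  | stats count as Count, first(FieldB) as Example by FieldA

first() keeps one arbitrary sample value per group (the first one the search encounters), which matches the "a single example" requirement; values(FieldB) would return all of them instead.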
I have a working dashboard where a token is used as a variable, but now I am trying to use the same concept in a direct search within the Search & Reporting app. I have Windows events with multiple fields that can carry a common value. In this example, the following search will give me usernames:

...base search (member_dn=* OR member_id=* OR Member_Security_ID=* OR member_user_name=*)

I would like to declare a variable that I can use as a value to search all four aforementioned fields. I tried the following with no luck:

index=windows_logs
| eval userid=johnsmith
| where $userid$ IN (member_dn, member_id, Member_Security_ID, member_user_name)
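Dashboard tokens like $userid$ don't exist in ad-hoc searches, and eval string literals need quotes, so a sketch of the same idea in plain SPL (johnsmith stands in for the value you want):

  index=windows_logs
  | eval userid="johnsmith"
  | where userid=member_dn OR userid=member_id OR userid=Member_Security_ID OR userid=member_user_name

The IN operator compares a field against literal values rather than other fields, which is why the explicit OR chain is used here. For repeated use, a search macro with an argument (Settings > Advanced search > Search macros) gives you a reusable "variable" in Search & Reporting.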
So I am trying to find the geo location for some IP addresses that keep crashing our web server when they crawl it. I am getting the information from the event logs. The IP addresses come in on a generic field called Message that contains a lot of text, so I am pulling them out with a rex command, but the iplocation command shows no country code. I have used the iplocation command to get geo information about IP addresses in the past several hours on another search, so I know it works in my system. When I use | where ip_address='ip-address' it shows no data, so I'm guessing that Splunk doesn't see the text in the created ip_address field as actual IP addresses. Does anyone know how I can make it see this data as an IP address? Or might there be a leading space or similar noise causing the issue, and if so, how do I get rid of it?

index="eventlog" EventCode=1309
| rex field=Message "User host address:\s(?<ip_address>.*)"
| iplocation ip_address=Country
| table ip_address, Country
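A sketch of a tightened version: iplocation takes the field name as a bare argument (ip_address=Country isn't valid syntax), and anchoring the rex to a dotted-quad pattern avoids capturing trailing text or whitespace:

  index="eventlog" EventCode=1309
  | rex field=Message "User host address:\s+(?<ip_address>\d{1,3}(?:\.\d{1,3}){3})"
  | iplocation ip_address
  | table ip_address, Country

If events still come back without a Country, a quick | eval l=len(ip_address) next to the raw value usually reveals leftover whitespace or non-IP text. Also note that in where, single quotes denote a field name, not a string, so compare against a literal address with double quotes: | where ip_address="1.2.3.4".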
The University of Nevada, Las Vegas (UNLV) is another premier research institution helping to shape the next generation of cybersecurity professionals through its student-powered SOC program, and the Splunk Academic Alliance program is a cornerstone for fostering that talent. Through this program, UNLV students, faculty, and IT staff receive no-cost access to Splunk training and certifications, crucial for growing talent today that benefits society tomorrow. The program equips them with a renewable one-year, 10GB license for Splunk software, alongside access to eLearning resources and additional benefits, ensuring equitable access to tools for solving global challenges.

Our latest case study about Splunk and UNLV highlights the hands-on application of this initiative. Not only does the university use Splunk Enterprise Security in its SOC, but the SOC is now student-powered. Jason Griffin, who oversees Splunk Enterprise Security for the campus, is also a professor who teaches graduate-level security data analytics courses using the Splunk Academic Alliance, and he integrates the Academic Alliance training directly into his curriculum. Initially optional, this training has now become a fundamental part of his teaching, linking analytics and Splunk in a real-world context that enhances both student learning and campus security.

Beyond the classroom, the impact of the Academic Alliance program at UNLV extends to university employees. The cybersecurity team at UNLV is also trained through the program, ensuring that the entire security apparatus is proficient in the latest Splunk technologies. This comprehensive educational approach not only keeps the material fresh for instructors like Professor Griffin but also continually advances the cybersecurity capabilities of the university. Through such initiatives, UNLV is not just a beneficiary of the Splunk Academic Alliance but a vibrant example of its success in action. For more detailed insights, you can explore the full case study here.
I've been using dbxquery connection=my_connection procedure=my_procedure to build reports, and a few procedures that my DBAs have built require time inputs; one I'm working on expects the parameter '@StartDate'. Is there a way to pass that through to the stored proc?
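One hedged sketch, assuming a SQL Server backend: instead of the procedure option, call the proc through dbxquery's query option with an EXEC statement, which lets you inline the parameter (the date literal, and the dashboard time token in the second line, are placeholders):

  | dbxquery connection=my_connection query="EXEC my_procedure @StartDate='2024-11-01'"

  | dbxquery connection=my_connection query="EXEC my_procedure @StartDate='$time_tok.earliest$'"

Whether the procedure option itself accepts parameters varies by DB Connect version, so the EXEC route is offered here as a workaround rather than an official parameter-passing API.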
Hello all, hoping someone can help me. We are setting up IAM user keys that are supposed to rotate on a monthly basis. We use those keys to send email from AppDynamics, and I can connect to the SMTP server just fine. What I need to find out is where this information is stored, so that I can create a script that will update it when the keys get rotated. Is it in the database, and if so, which table? Or if it's in a file, which file? Thanks for any and all help!