I've been working on a search that I *finally* managed to get working: it looks for events generated by a given network switch and port name and then gives me all the devices that have connected to that specific port over a period of time. Fortunately, most of the device data is included alongside the events that contain the switch/port information; that is, everything except the hostname. Because of this, I've used the join command to run a second search against a second data set that contains the hostnames for all devices that have connected to the network, matching on the shared MAC address field. The search works, and that's great, but it only works over a time period of about a day or so before the subsearch exceeds the 50,000-event limit. Is there any way I can get rid of the join command and maybe use the stats command instead? That's what similar posts seem to suggest, but I have trouble wrapping my head around how the stats command can correlate data from two different events from different data sets, in this case matching dhcp_host_name to the corresponding device in my networking logs. I'll gladly take any assistance. Thank you.

index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| eval time=strftime(_time,"%Y-%m-%d %T")
| join type=left left=L right=R max=0 where L.src_mac=R.src_mac L.IP_Address=R.src_ip
    [| search index="indexB" source="/var/logs/devices.log"
     | fields src_mac src_ip dhcp_host_name]
| stats values(L.time) AS Time, count as "Count" by L.src_mac R.dhcp_host_name L.IP_Address L.SwitchID L.Port_Id
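For what it's worth, one stats-based pattern is to search both indexes in a single base search and let stats group on the shared fields. This is only a sketch, under the assumption that indexB already carries src_mac and src_ip in lowercase colon-separated form and that the field names match the searches above:

( index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13 )
OR ( index="indexB" source="/var/logs/devices.log" )
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| eval ip=coalesce(IP_Address, src_ip)
| stats values(eval(if(index=="indexA", strftime(_time,"%Y-%m-%d %T"), null()))) AS Time,
        sum(eval(if(index=="indexA", 1, 0))) AS Count,
        values(dhcp_host_name) AS dhcp_host_name,
        values(SwitchID) AS SwitchID,
        values(Port_Id) AS Port_Id
    by src_mac ip
| where Count > 0

The "where Count > 0" roughly mimics the left join: switch/port events that never matched a DHCP record are kept (just without a hostname), while MAC addresses seen only in indexB are dropped.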
I've piped a Splunk log query extract into a table showing disconnected and connected log entries sorted by time. NB: row 1 is fine. Row 2 is fine because a "connected" entry followed it within 120 sec. Now I want to show "disconnected" entries with no subsequent "connected" row within, say, a 120 sec time frame, so I want to pick up rows 4 and 5. Can someone advise on the Splunk query format for this?

Table = Connect_Log
Row  Time       Log text
1    7:00:00am  connected
2    7:30:50am  disconnected
3    7:31:30am  connected
4    8:00:10am  disconnected
5    8:10:30am  disconnected
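A rough sketch of one way to flag those rows, assuming the connect/disconnect text lives in a field called log_text and the index/sourcetype names below are placeholders: sort the events newest-first so streamstats can hand each row the event that follows it in time, then keep disconnects whose next event is not a connect within 120 seconds.

index=your_index sourcetype=connect_log log_text IN ("connected", "disconnected")
| sort 0 -_time
| streamstats current=f window=1 latest(_time) AS next_time latest(log_text) AS next_event
| sort 0 _time
| where log_text="disconnected" AND (isnull(next_event) OR next_event!="connected" OR (next_time - _time) > 120)
| table _time log_text next_event next_time

If multiple hosts are mixed in the same search, the streamstats would also need a by clause so each host's timeline is evaluated separately.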
Hi folks! I want to create a custom GeneratingCommand that makes a simple API request, but how do I save the API key in passwords.conf? I have a default/setup.xml file with the following content:

<setup>
  <block title="Add API key(s)" endpoint="storage/passwords" entity="_new">
    <input field="password">
      <label>API key</label>
      <type>password</type>
    </input>
  </block>
</setup>

But when I configure the app, the password (API key) is not saved in the app folder (passwords.conf). And if I need to add several api keys, how can I assign names to them and get information from the storage? I doubt this code will work:

try:
    app = "app-name"
    settings = json.loads(sys.stdin.read())
    config = settings['configuration']
    entities = entity.getEntities(['admin', 'passwords'], namespace=app,
                                  owner='nobody', sessionKey=settings['session_key'])
    i, c = entities.items()[0]
    api_key = c['clear_password']
    #user, = c['username'], c['clear_password']
except Exception as e:
    yield {"_time": time.time(), "_raw": str(e)}
    self.logger.fatal("ERROR Unexpected error: %s" % e)
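As a quick sanity check on whether anything was actually written to the credential store, one option is to query it over REST from a search (a sketch; swap app-name for your app's directory name, and note that reading clear_password requires the list_storage_passwords capability):

| rest /servicesNS/nobody/app-name/storage/passwords splunk_server=local
| table title realm username clear_password

Credentials stored through storage/passwords are keyed by realm:username, so one way to hold several API keys is to give each one a distinct realm (or username) and read it back by that key.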
Hi All, I have a CSV lookup with the data below:

Event_Code
AUB01
AUB36
BUA12

I want to match it against a dataset that has a field named Event_Code with several values, and extract the count of each matching event code per day. My query:

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| lookup dataeventcode.csv Event_Code
| timechart span=1d dc(Event_Code)

However, the result shows a count across all (~100) event codes per day instead of matching the event codes against the CSV first and then giving the total count per day.
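If the aim is a per-day count of only the events whose Event_Code appears in the CSV, one sketch (assuming the CSV is available as a lookup file named dataeventcode.csv) is to have the lookup return a marker field, drop the non-matching events, and count events rather than distinct codes:

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| lookup dataeventcode.csv Event_Code OUTPUT Event_Code AS matched_code
| where isnotnull(matched_code)
| timechart span=1d count by Event_Code

Note that dc(Event_Code) counts distinct codes per day rather than events, which may explain the figure you were seeing.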
Dear experts, why is the following line

| where my_time>=relative_time(now(),"-1d@d") AND my_time<=relative_time(now(),"@d")

accepted as a valid statement in a search window, but as soon as I want to use exactly this code in a dashboard, I get the error message: "Error in line 100: Unencoded <"? The dashboard code validator somehow fails on the <= comparison. >= works, as does =, but not <=. We're on Splunk Cloud.
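Simple XML dashboards parse the query as XML content, so a literal < (and therefore <=) looks like the start of a tag and has to be escaped as &lt;, or the whole query wrapped in a CDATA section; > needs no escaping, which is why >= passes. A sketch of the escaped form inside the dashboard's <query> element:

<query>
  | where my_time>=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d")
</query>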
I'd like to take an existing Splunkbase app and make a few changes to the bin files. What is the proper way to do this in Splunk Cloud?  I'm guessing I should modify it, save it to a tar.gz and upload/install it as a custom app?
Hello, I know that it is necessary to do this for the forwarders but I would like to confirm whether it is necessary to add other components (such as indexers, search heads) as clients to our Deployment Server in Splunk environment. We currently have a Deployment Server set up, but we are unsure if registering the other instances as clients is a required step, or if any specific configurations need to be made for each type of component. Thank you in advance. Best regards,
Hello, is it possible to send logs (for example /var/log/GRPCServer.log) directly to Splunk Observability Cloud using the Splunk Universal Forwarder? If yes, how can we configure the Splunk Universal Forwarder to send logs to Splunk Observability Cloud directly, given that we don't have an IP address / hostname for Splunk Observability Cloud, nor a 9997 port open on the Splunk Observability Cloud end? In general we use the steps below to configure a Splunk Universal Forwarder for Splunk Enterprise/Cloud:

Add the IP address/hostname the logs should be sent to: "./splunk add forward-server <IP/HOST_NAME>:9997"
Add the file whose logs should be collected: "./splunk add monitor /var/log/GRPCServer.log"

Thank You
Would anyone be able to help me with one more thing, please? I have a number display dashboard which represents the BGP flap details as # Device_name & # BGP peer IP; however, I cannot add the time at which the BGP flapped to the number display.

Current query:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| dedup Device_name,src_ip
| stats count by Device_name,src_ip,state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor

Is there something we can add to display the flap time in the same number display?
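One sketch for carrying the flap time through: drop the dedup and let stats keep the most recent event per device/peer/state, then format it. Whether the single value visualization can actually show the extra field depends on how the panel is configured, so treat this as a starting point rather than a finished panel:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count latest(_time) AS last_flap by Device_name,src_ip,state_to
| eval last_flap=strftime(last_flap, "%Y-%m-%d %H:%M:%S")
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor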
Hello Splunkers!! I have reassigned all the knowledge objects of 5 users to the admin user. Since then, those users are no longer visible in the user list when I log in as the "Admin User". Please help me identify the root cause and fix this. Thanks in advance.
Hello, I wanted to know more detailed information, so I opened this case about alert settings. I set Threshold '90', Trigger 'Immediately', and Alert when 'Above'. With these settings, does the alarm start occurring from 90.1? I remember that in the beginning, when I set it to 90, it was registered as 89, and it is currently set up that way, so I would like to know whether an alert fires at 89.1. If an alarm does fire at 89.1, I need to fix it as soon as possible. Please reply.

Thank you!!!
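If the requirement is to fire strictly above 90 (90.1 yes, 90.0 no), one way to make the boundary explicit rather than relying on the 'Above' wording is a custom trigger condition on the field the alert evaluates (cpu_pct here is only a placeholder for your actual field name):

search cpu_pct > 90

Using >= 90 instead would also fire at exactly 90.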
This is my first time using Splunk Cloud, and I'm trying to perform field extraction directly on the heavy forwarder before the data is indexed. I created REPORT and TRANSFORMS entries in props.conf, with transforms.conf configured using a regex that I tested and found functional in Splunk Cloud through the field extractor, but it does not work when I try to use the HF. Are there any limitations on field extraction when using a heavy forwarder with Splunk Cloud?
Hey Splunky People! We are excited to share the latest updates in Splunk Enterprise 9.4. In this release we have many awaited features and enhancements for both analysts and admins, helping you further your organizational progress toward digital resilience.

Comprehensive Visibility
- Deployment Server 9.4 Enhancements: Provides a centralized interface to manage and troubleshoot Splunk agents, with a new UI for improved user experience and accessibility compliance.
- Health and Status Overview: New capabilities for monitoring the health and status of agents, enhancing visibility into deployment.
- Federated Search for Splunk: Enhanced support for metric indexes and eventcount across modes, improving visibility into remote Splunk platforms.
- Dashboard Studio Enhancements: Updates to Dashboard Studio for better visualization and insights. Read what's new.
- SPL2 Public Beta: Offers flexibility for custom app development, enhancing control over the Splunk ecosystem.

Rapid Detection & Investigation
- Enhanced Search Commands: Updates to the foreach command and support for mcatalog in federated searches, facilitating more effective search capabilities.
- Eval Function Enhancements: New functions for data type conversion and type testing, aiding in efficient data manipulation and investigation.
- Eliminate SHC Out-of-Sync Issues: Improved SHC replication to reduce errors and streamline search head cluster management.

Optimized Response
- Quarantine of Large CSV Lookups: Automatic quarantining of large lookups to prevent replication issues, ensuring smoother operations.
- Workload Management with cgroups v2: Support for Linux cgroups version 2, optimizing resource management and response efficiency.

There are additional updates and enhancements that we've released that provide platform stability (KVStore upgrade to 7.0) and enhanced user experience, supporting the overall usability and performance of Splunk Enterprise. Check out the 9.4 release notes for additional details.

- Python 2 is in the process of complete removal and will soon no longer be available in coming releases.
- The jQuery v3.5 library is now set as the platform default; prior jQuery libraries are no longer supported.
I am in the process of implementing Splunk in a fairly long-lived environment.  Log directories contain date-masked log files.  I would like to ignore files before today's date, and only import new files.  Example:  /opt/someApplication/logs/someApplication.202412160600.out I am unable to wildcard /opt/someApplication/logs/someApplication.*.out as there are logs dating back to 2017 and I'd exceed our daily license/quota by several orders of magnitude.  Changing the logging format is not an option.  Exclude-lists appear to be a solution, but even using regex would be incredibly burdensome. Thoughts?    
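One possibility, assuming the 2017-era files still carry old modification times, is ignoreOlderThan on the monitor stanza, which tells the tailing processor to skip any file whose modification time is older than the given window. A sketch (index and sourcetype names are placeholders; note the documented caveat that a file skipped this way is not revisited even if it is later updated):

[monitor:///opt/someApplication/logs/someApplication.*.out]
index = some_application_index
sourcetype = someApplication
ignoreOlderThan = 1d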
I am new to Splunk and am teaching myself how to use it as I integrate it with my environment. I inherited an existing Splunk Enterprise instance that, at one point, apparently used to work to some degree but by the time I joined the team and took over had fallen into disuse. After getting it upgraded from 9.0 to 9.3.2, rolling out Universal Forwarders, tinkering with inputs.conf, and fixing some network issues, I found myself finally able to get Windows Event Log data into my indexer from a couple of different test machines.

The inputs.conf I was using was something I had found on one of the existing machines before reinstalling the UF, and I noticed that it had a lot more stuff in it than Windows Event Log stanzas. Some of it suggested it monitored things I was interested in right now, such as CPU utilization. However, I noticed that exactly nothing outside of Windows Event data was ever making it across the wire, no matter how I reconfigured the inputs.conf stanzas. The one I honed in on first was CPU utilization, and through research I discovered that when I invoke a stanza in inputs.conf it has to exist to some degree within the Settings > Data Inputs library (?) present on my Splunk instance. perfmon://CPU, perfmon://CPULoad, and perfmon://Processor were all stanzas I found online for (among other things) checking what % CPU utilization a target server was at. None of them worked. Looking into these Data Inputs, it looks like something is broken: when I select these three (as an example), Splunk's web UI throws up an error saying that "Processor is not a valid object".

Following some guidance online, I was able to make my own custom Data Input just called testCPU, pointing at a custom index I call testWindows, and basically make it a clone of CPU (taking in % Processor Time and % User Time as counters and whatnot). For the required object, I noticed that "Processor Information" was an option I could pick rather than "Processor", so I went with that one. I then deployed a stanza in inputs.conf that says perfmon://testCPU on one of my UFs, and it absolutely works. My indexer is now pulling in CPU % use information. I suspect if I went back to the three CPU-related entries above and set them to "Processor Information", they would work, and any of the existing apps I inherited that invoke those stanzas would themselves start pulling in data.

However, I do not know why my built-in Data Inputs are broken, and it isn't just limited to the CPU ones I used as an example above. For example, the "System" input claims "System is not a valid object" and the available objects dropdown does not have an obvious replacement (there's no "System Information" to follow the pattern above). The "PhysicalDisk" DI claims "PhysicalDisk is not a valid object" but has nothing obvious to replace it either. Available Memory claims "Memory" is not a valid object with no obvious replacement, etc.

Does anyone know what might be going on here? Looking at how the stanzas are configured online, the examples I see for the handful above do in fact invoke object = "xxx" matching the names of things my Splunk says aren't valid. Some of these might have obvious replacements ("Network" might be "Physical Network Card Activity" or something like that) but a lot of them don't. How should I go about fixing these? My first assumption was that I would find some kind of "Objects" config file with clues to how these got redefined, but that wasn't the case.
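For reference, the working custom input described above would look something like this as a perfmon stanza in inputs.conf on the UF (a sketch; the counters, interval, and index are the illustrative values mentioned in the post):

[perfmon://testCPU]
object = Processor Information
counters = % Processor Time; % User Time
instances = *
interval = 10
index = testWindows
disabled = 0

Since the Data Inputs page appears to enumerate performance objects from the machine where Splunk Enterprise itself runs, "X is not a valid object" errors there are sometimes a sign that the local Windows performance counter registry is damaged; rebuilding it (for example with lodctr /R from an elevated prompt) is one avenue worth checking, though that is a guess rather than a confirmed diagnosis here.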
I have a ticket in with support, but I am broadening the scope here to see if anyone else has familiarity with something like this (and also to create something for another user with the same issue to find in the future).
I am developing a Splunk setup using the Docker image and Podman. I am trying to set up 2 indexers along with an indexer manager. Each container will run on a separate RHEL VM. I successfully set up the manager. I then go to register the indexer as a peer, enter the VM host IP of the manager, and successfully register the indexer as a peer. When I reboot and check the indexer manager, it shows the indexer peer as up, but shows the IP address of the manager container for the indexer peer. When I try to add another indexer it does the same thing and will not let me add another indexer. I have tried statically assigning IPs and confirmed all IPs are different, etc. I wasn't sure if anyone has run into this issue. All VM hosts are on the same subnet and can communicate. Firewall off and SELinux off. Using 9887 as the replication port and 8089 as the manager comms port. I am running rootless outside and root inside the container. It has to be a permission or file that I am missing; when I set it up as root:root it works perfectly. Any ideas are appreciated.
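One thing that might be at play: inside a container the peer advertises whatever address it sees locally, so the manager can record the container address instead of the VM host address, and two peers advertising the same address can collide. A sketch of server.conf settings on each indexer peer that force the advertised addresses (all values are placeholders for your own IPs and secret):

[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://<manager-vm-host-ip>:8089
pass4SymmKey = <cluster-secret>
register_replication_address = <this-vm-host-ip>
register_forwarder_address = <this-vm-host-ip>
register_search_address = <this-vm-host-ip>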
Hello, I have an issue where I am part of multiple roles on Splunk Enterprise and Splunk Enterprise Security, and the same role and SAML group have access to all the indexes. On the Splunk Enterprise search head I am part of 3 roles (A, B, C) which have search filters, but I am also part of role D which has access to all indexes. When I try to search any data I get nothing back, yet on the Enterprise Security SH I am able to view all the data as expected. Is it some kind of precedence issue on the Splunk Enterprise SH that is causing this? Please help me.

Thanks
I got an alert working "for each result" by using a query that creates the following table:

errorType      count
Client         10
Credentials    50
Unknown        5

How do I set a different threshold for each result? I tried using a custom trigger as follows and was hoping to only get an email for "client" and "credentials", but I still get all 3.

search (errorType = "Client" AND count > 8) OR (errorType = "Credentials" AND count > 8) OR (errorType = "Other" AND count > 8)
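Because a "trigger for each result" alert sends one notification per row the search returns, one sketch is to push the per-type thresholds into the search itself so only rows that exceed their own threshold survive, then leave the trigger condition as "number of results > 0". The first line stands in for your existing search, and the threshold values are purely illustrative:

<existing search that produces errorType and count>
| eval threshold=case(errorType="Client", 8, errorType="Credentials", 40, true(), 9999)
| where count > threshold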
Hello, I am experiencing intermittent log ingestion issues on some servers and have observed potential queue saturation in the process. Below are the details of the issue and the related observations.

Setup Overview:
I am using Packetbeat to capture DNS queries across multiple servers. Packetbeat generates JSON log files, rotating logs into 10 files, each with a maximum size of 50 MB, and writes 3-4 JSON files every minute. Setup -> Splunk Cloud 9.2.2, on-prem Heavy Forwarder 9.1.2, and Universal Forwarder 9.1.2.

Example list of Packetbeat log files (rotated by Packetbeat):
packetbeat.json
packetbeat.1.json
packetbeat.2.json
packetbeat.3.json
...
packetbeat.9.json

Issue Observed:
On some servers, the logs are ingested and monitored consistently by the Splunk agent, functioning as expected. However, on other servers, logs are ingested for a few minutes, followed by a 5-6 minute gap. This cycle repeats, resulting in missing data in between, while other data collected from the same server is ingested correctly.

Additional Observations:
While investigating the issue, I observed the following log entry in the Splunk Universal Forwarder _internal index:

11-15-2024 17:27:35.615 -0600 INFO HealthChangeReporter - feature="Real-time Reader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
host = EAA-DC
index = _internal
source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log
sourcetype = splunkd

The following conf is applied to all DNS servers:

limits.conf
[thruput]
maxKBps = 0

server.conf
[queue]
maxSize = 512MB

inputs.conf
[monitor://C:\packetbeat.json]
disabled = false
index = dns
sourcetype = packetbeat

Any direction to resolve this is appreciated! Thank you!
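To narrow down which queue is filling and on which instance (the UF itself versus the heavy forwarder it sends to), one common starting point is the queue metrics every Splunk instance writes to metrics.log; a sketch, with the host value as a placeholder:

index=_internal source=*metrics.log* group=queue host=<uf-or-hf-hostname>
| eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) by name

If the forwarder's tcpout queue is the one pegged at 100%, the bottleneck is downstream (HF or the Cloud ingestion), whereas a full parsingqueue points back at the local instance.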
Hi Team, in the query below I don't want the results where the state_to field is "Up"; I just want to see the devices that are in a down state. Simply using the exclude operator (state_to!=Up) is not an option, because that shows every historical down result, which is not my aim.

Please help and suggest!

My query:

index="network" %BGP-5 *clip*
| dedup src_ip
| stats count by state_to,Device_name,src_ip

My expected result/aim: show only the devices which are down at the moment, without the Up results.
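One sketch that keeps only devices whose most recent BGP event is a Down (so peers that have since recovered drop out) is to take the latest state per device/peer and filter on it afterwards:

index="network" %BGP-5 *clip*
| stats latest(state_to) AS current_state latest(_time) AS last_change count by Device_name,src_ip
| where current_state="Down"
| eval last_change=strftime(last_change, "%Y-%m-%d %H:%M:%S")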