All Posts

Does it have to be regex?  I'm a big fan of them, but this problem looks like it's made for multikv.
Hi @jotne, it works great, just one small favor: how do I stop it when it sees the 'aggregation groups' line below, with data like this? It is also capturing that part, but the rest is great. Thank you.

name                    id    speed/duplex/state            mac address
--------------------------------------------------------------------------------
ethernet1/3             66    1000/full/up                  b6:2c:23:e0:40:42
ethernet1/4             67    1000/full/up                  b6:2c:23:e0:40:43
ethernet1/5             68    10000/full/up                 b6:2c:23:e0:40:44
ethernet1/6             69    10000/full/up                 b6:2c:23:e0:40:45
ethernet1/7             70    10000/full/up                 b6:2c:23:e0:40:46
ethernet1/8             71    10000/full/up                 b6:2c:23:e0:40:47
ae1                     16    [n/a]/[n/a]/up                b6:2c:23:e0:40:10
ae2                     17    [n/a]/[n/a]/up                b6:2c:23:e0:40:11
ha1-a                   5     1000/full/up                  d1:f4:b3:c3:25:97
ha1-b                   7     1000/full/up                  d1:f4:b3:c3:25:96
vlan                    1     [n/a]/[n/a]/up                b6:2c:23:e0:40:01
loopback                3     [n/a]/[n/a]/up                b6:2c:23:e0:40:03
tunnel                  4     [n/a]/[n/a]/up                b6:2c:23:e0:40:04
hsci                    8     40000/full/up                 01:20:6c:1c:81:08
aggregation groups: 0
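One way to make the extraction stop at that trailer line (a sketch, not from the thread): tighten the last group to a strict MAC-address pattern so "aggregation groups: 0" can never satisfy it. The only changes from the accepted regex are the mac group and disallowing whitespace inside speed/duplex:

| rex max_match=0 "(?<name>\S+)\s+(?<id>\d+)\s+(?<speed>[^/\s]+)/(?<duplex>[^/\s]+)/(?<state>\S+)\s+(?<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2})"

With max_match=0, rex returns multivalue fields, one value per interface row, and the trailer line simply never matches.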
I have a string field, provTimes: a=10; b=15; c=10; — it basically holds semicolon-separated sub-fields in the value, and each sub-field has a number on the right-hand side. These sub-fields are dynamic: they can be a, v, e, f in one event and z, y in another. Ignoring the sub-field names, I am only concerned with the numbers they hold; I just want to add them all up.

Example:
provTimes: a=10; b=15; c=10;   result = 35
provTimes: x=10; b=5;          result = 15
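A minimal SPL sketch of one way to do this (the makeresults line just fakes a sample event; on real data, replace it with your base search and group the final stats by something that uniquely identifies each event):

| makeresults
| eval provTimes="a=10; b=15; c=10;"
| rex field=provTimes max_match=0 "=\s*(?<num>\d+)"
| mvexpand num
| stats sum(num) AS result BY provTimes

rex with max_match=0 collects every right-hand-side number into the multivalue field num regardless of the sub-field names, and mvexpand then lets stats add them up.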
Here you go:

(?<name>\S+)\s+(?<id>\d+)\s+(?<speed>[^\/]+)\/(?<duplex>[^\/]+)\/(?<state>\S+)\s+(?<mac>\S+)

https://regex101.com/r/99K6Do/1
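For reference, a sketch of applying that pattern at search time (index and sourcetype here are placeholders; max_match=0 extracts every interface row when the whole table arrives as one event):

index=your_index sourcetype=your_sourcetype
| rex max_match=0 "(?<name>\S+)\s+(?<id>\d+)\s+(?<speed>[^\/]+)\/(?<duplex>[^\/]+)\/(?<state>\S+)\s+(?<mac>\S+)"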
Thanks so much!
@Mr_Sneed  Could you elaborate a little more on this? Copy and paste the configurations that you were pushing to the forwarders from the deployment server.
This seems to work, but feels a little "hack-ish":

index=txdir_mainframe | transaction host maxevents=20 | dedup host

If anyone has any better ideas, I am open to suggestions. Thanks,
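A lighter-weight sketch that avoids transaction entirely (assuming the default reverse-time search order, so per-host counting runs newest-first):

index=txdir_mainframe
| streamstats count AS recent_rank BY host
| where recent_rank<=10

Shorter still: dedup takes an optional count, so index=txdir_mainframe | dedup 10 host keeps the first 10 events per host, which in descending time order are the 10 most recent.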
@ansusabu_25 the app is not ES-specific and can be installed on any SH. You should already see the option shown, and it will stop the multiple artifacts being created. At the moment, multiple artifacts are created because there are one or more MV fields in the results data. If you set the setting shown, you will get one event with one artifact in it, but the values of some fields will be lists, which you will need to handle in playbooks, i.e. configure any blocks to process both single items and lists. Previously I have used a playbook on ingest to split the MV fields into artifacts without all the additional duplication. Hope this helps! Happy SOARing!!
@Komal0113  Certainly! If the IDP certificate setup isn’t working as expected, you can create your own certificate for SAML configuration in Splunk. Here are the steps to achieve this: https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/Howtoself-signcertificates   
I am trying to write a search that will pull the 10 (or so) most recent events for each host. The tail and head commands apparently do not allow any grouping, and I am trying to wrap my head around how to do this. I know this does not work, but this is what I am looking for:

index=index1 | head 10 by host

The closest I can come up with is:

index=index1 | stats values(_raw) by host

But that still gives me everything in the time range, not just the last 10 events per host.
Hi, I would like some help extracting each line of the data below into separate fields: name, id, speed & duplex, state, and mac address. It is critical that "state" is its own field. I am getting stuck and need help. Thank you. Data below:

name                    id    speed/duplex/state            mac address
--------------------------------------------------------------------------------
ethernet1/3             66    1000/full/up                  b6:2c:23:e0:40:42
ethernet1/4             67    1000/full/up                  b6:2c:23:e0:40:43
ethernet1/5             68    10000/full/up                 b6:2c:23:e0:40:44
ethernet1/6             69    10000/full/up                 b6:2c:23:e0:40:45
ethernet1/7             70    10000/full/up                 b6:2c:23:e0:40:46
ethernet1/8             71    10000/full/up                 b6:2c:23:e0:40:47
ae1                     16    [n/a]/[n/a]/up                b6:2c:23:e0:40:10
ae2                     17    [n/a]/[n/a]/up                b6:2c:23:e0:40:11
ha1-a                   5     1000/full/up                  d1:f4:b3:c3:25:97
ha1-b                   7     1000/full/up                  d1:f4:b3:c3:25:96
vlan                    1     [n/a]/[n/a]/up                b6:2c:23:e0:40:01
loopback                3     [n/a]/[n/a]/up                b6:2c:23:e0:40:03
tunnel                  4     [n/a]/[n/a]/up                b6:2c:23:e0:40:04
hsci                    8     40000/full/up                 01:20:6c:1c:81:08

Any help will be appreciated. Thanks,
We are getting the same issue for some customers on our stack but not others.  Trying to figure it out.
We are not using the ES app. Is this option only for the ES app? We are forwarding the alerts from Splunk to SOAR using the SOAR export app.
@ujju219  To use the Splunk Add-on for MySQL Database, you'll need to configure appropriate permissions for the MySQL user. Here are the recommended steps:

1. MySQL user permissions. The MySQL user account used by the Splunk Add-on requires specific permissions to interact with the database. Assign the following permissions to the MySQL user:
- SELECT: required for reading data from the MySQL database.
- SHOW DATABASES: needed to list available databases.
- SHOW TABLES: necessary to discover tables within a database.
- REPLICATION CLIENT: required for reading binary logs (if applicable).
- EXECUTE: needed for executing stored procedures (if used).

2. Database-specific permissions. If you're connecting to a specific database, grant additional permissions based on your use case:
- Read-only access: if the Splunk Add-on only needs to read data, grant read-only access to the specific database and tables.
- Write access: if you plan to write data back to the database (e.g., a summary index), grant appropriate write permissions.

3. Host and port permissions. Ensure that the MySQL user has permission to connect from the host where the Splunk instance (heavy forwarder or indexer) is running. Grant access to the specific IP address or hostname of the Splunk server, and verify that the MySQL server allows connections on the specified port (usually 3306).

4. Secure credentials. Store the MySQL user credentials securely in Splunk; use Splunk's credential management system to avoid hardcoding credentials in configuration files.

5. Splunk DB Connect configuration. In Splunk, configure the Splunk DB Connect input to connect to the MySQL database using the MySQL user credentials. Specify the database name, hostname, port, and other relevant details.

6. Test the connection. After configuring the input, test the connection to ensure successful communication between Splunk and MySQL, and verify that data retrieval works as expected (see the sketch after this list).

Remember to document the permissions granted to the MySQL user and monitor the data collection process. If you encounter any issues, refer to the official Splunk documentation for additional guidance.

https://docs.splunk.com/Documentation/AddOns/released/MySQL/Setup
Configure Splunk DB Connect security and access controls - Splunk Documentation: https://docs.splunk.com/Documentation/DBX/3.15.0/DeployDBX/Configuresecurityandaccesscontrols
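For the connection test in step 6, a quick smoke test from the Splunk search bar using DB Connect's dbxquery command (a sketch; my_mysql_conn is a placeholder for whatever you named the connection in DB Connect):

| dbxquery connection="my_mysql_conn" query="SELECT 1"

If this returns a single row, Splunk can reach MySQL with the configured credentials and you can move on to the real inputs.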
Do you have permission to access the missing dashboards?
Hi, I have a KV time-based lookup generated from DHCP logs with content like this: time,ip,hostname,mac 1709093697,10.223.5.43,host-43,aa:bb:cc:dd:ee:ff and transforms.conf for it: [dhcp_timebase... See more...
Hi, I have a KV time-based lookup generated from DHCP logs with content like this:

time,ip,hostname,mac
1709093697,10.223.5.43,host-43,aa:bb:cc:dd:ee:ff

and transforms.conf for it:

[dhcp_timebased_lookup]
collection = dhcp_timebased_collection
external_type = kvstore
fields_list = _key,time,ip,hostname,mac
max_offset_secs = 691200
min_offset_secs = 0
time_field = time
time_format = %s

The lookup works well when I run a search that pulls events from an index:

index=test source=timebased | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname | table _time dest_ip hostname

Hostname is there:

_time dest_ip hostname
1709093697 10.223.5.43 host-43

But when I use this lookup after non-event-generating commands it does not work:

index=test source=timebased | table _time dest_ip | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname

OR

index=test source=timebased | stats count BY _time dest_ip | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname

OR

| makeresults | eval dest_ip = "10.223.5.43", _time = 1709093697 | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname

OR

| tstats count from datamodel=SomeDM BY _time SomeDM.dest_ip span=1s | lookup dhcp_timebased_lookup ip AS "SomeDM.dest_ip" OUTPUT hostname

Hostname doesn't show up. If I turn the time-based setting off for this lookup, it outputs hostnames for all the searches above. This makes me think there is some difference between the _time field in events' metadata and the _time field in statistics. Is that so? And is there a solution besides the "join with inputlookup and addinfo" workaround?
Hi @altink, if the report is OK for you, leave the actual configuration as it is. Ciao. Giuseppe
Hi @kate, as you can read at https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers you can check the compatibility between UF versions and Splunk Enterprise; Splunk Cloud is aligned to the latest version of Splunk Enterprise. About Ubuntu, see https://www.splunk.com/en_us/download/universal-forwarder.html : in short, you need to know which kernel version your Ubuntu runs, but in general the latest UF version is compatible with kernels 3.x and later; in other words, all of them! Ciao. Giuseppe
@kate  The Splunk Cloud Platform is compatible with the Universal Forwarder for data collection. Let's determine the appropriate version of the Universal Forwarder for your use case:

Universal Forwarder compatibility: the Universal Forwarder is the best choice for collecting data from systems in your environment with minimal resource requirements. For Splunk Cloud, you should use a Universal Forwarder that aligns with the Splunk Cloud version you are using.

Recommended version: based on the compatibility matrix, the following Universal Forwarder versions are compatible with Splunk Cloud 9.1.2308.203:
- 9.0.x
- 9.1.x
- 9.2.x

Deployment considerations: deploy the Universal Forwarder on your Ubuntu (Debian 64-bit) systems and configure it to send data to your Splunk Cloud instance.

Remember to verify the compatibility and monitor the data collection process. You can find more details in the Splunk documentation:

https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsingforwardingagentsCloud
https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers
https://docs.splunk.com/Documentation/Forwarder/9.2.0/Forwarder/Deploy
Which version of the Universal Forwarder for Ubuntu (Debian 64-bit) is compatible with Splunk Cloud version 9.1.2308.203?