All Posts



But in local/authorize.conf this stanza is not there
Hi @splunklearner

In your authorize.conf file you have a stanza named [role_system_admin]. Remove the following two attributes:

edit_roles_grantable = enabled
grantableRoles = system_admin

These lines were required in older versions of Splunk; now, however, they are causing the issues you are seeing. Check out https://community.splunk.com/t5/Security/Users-missing-from-Access-Control/m-p/487058#M11170 for more info on this fix.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
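After removing the two attributes flagged above (edit_roles_grantable and grantableRoles), the stanza would look something like this. This is a sketch, not a full config: any other attributes you already have under [role_system_admin] stay exactly as they are.

```ini
# local/authorize.conf -- after the fix
[role_system_admin]
# edit_roles_grantable = enabled   <- removed
# grantableRoles = system_admin    <- removed
# ...any other existing attributes remain unchanged...
```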
We have multiple roles created in Splunk, restricted by their index; users are added to each role via an AD group, and we use the LDAP method for authentication. Below is authentication.conf:

[authentication]
authType = LDAP
authSettings = uk_ldap_auth

[uk_ldap_auth]
SSLEnabled = 1
bindDN = CN=Infodir-HBEU-INFSLK,OU=Service Accounts,DC=InfoDir,DC=Prod,DC=FED
groupBaseDN = OU=Splunk Network Log Analysis UK,OU=Applications,OU=Groups,DC=Infodir,DC=Prod,DC=FED
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = aa-lds-prod.uk.fed
port = 3269
userBaseDN = ou=HSBCPeople,dc=InfoDir,dc=Prod,dc=FED
userNameAttribute = employeeid
realNameAttribute = displayname
emailAttribute = mail

[roleMap_uk_ldap_auth]
<roles mapped with AD group created>

Checked this post - https://community.splunk.com/t5/Security/How-can-I-generate-a-list-of-users-and-assigned-roles/m-p/194811 - and tried the same command:

| rest /services/authentication/users splunk_server=local | fields title roles realname | rename title as userName | rename realname as Name

I ran this on the SH, but it returns only 5 results even though we have nearly 100 roles created. Even with splunk_server=*, the result is the same. I have the admin role as well, so I should have the needed capabilities. Not sure what I am missing here? Any thoughts?
Hi @Narendra_Rao

If you're looking for something for Splunk Cloud then check out https://www.splunk.com/en_us/blog/artificial-intelligence/unlock-the-power-of-splunk-cloud-platform-with-the-mcp-server.html

Having looked at the .conf25 sessions, it sounds like an official Splunk Enterprise MCP server will be released/announced then; for now it's Cloud only.

In the meantime, back in April I built https://github.com/livehybrid/splunk-mcp, which I've been using with a couple of customers, and I'm currently testing a Splunk-native app version which should be updated in GitHub soon.

Ultimately, if you're not in a hurry, it's worth waiting to see what's announced at .conf, or using an existing open-source version in the meantime.
I'm working on observability tooling and have built an MCP bridge that routes queries and admin activities for Splunk, along with several other tools. How do I find out whether there are existing MCP servers already built for Splunk, so I can get a head start? Happy to collab!
I am using the same command and running as admin, but getting only a few users. We have nearly 50, but I get only 4-5. We use LDAP auth in our environment. Am I missing something? We create roles on the deployer and push them to the SHs.
livehybrid, wow. Absolutely cool. Works fantastically. Thank you very much.
Hi @spisiakmi

You can add some HTML with CSS to a panel in your dashboard like this; note this only works for classic dashboards:

<html>
  <style>
    .splunk-dropdown button,
    button span,
    .splunk-dropdown span,
    .splunk-dropdown label {
      font-size: 1.1em !important;
    }
  </style>
</html>
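For context, in a classic (SimpleXML) dashboard the <html> block sits inside a panel alongside your other elements. A sketch of the placement (the row/panel structure here is illustrative; drop the block into whichever panel suits your layout):

```xml
<row>
  <panel>
    <html>
      <style>
        .splunk-dropdown button, button span,
        .splunk-dropdown span, .splunk-dropdown label {
          font-size: 1.1em !important;
        }
      </style>
    </html>
  </panel>
</row>
```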
Hi, can anybody help: how do I change the font size of drop-down items/selections? Here is my dropdown:

<input type="dropdown" token="auftrag_tkn" searchWhenChanged="true" id="dropdownAuswahlAuftrag">
  <label>Auftrag</label>
  <fieldForLabel>Auftrag</fieldForLabel>
  <fieldForValue>Auftrag</fieldForValue>
  <search>
    <query>xxxxx</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>
Strictly speaking, "$searchCriteria$" is not the same as $searchCriteria|s$ as the |s filter will deal with things such as embedded quotes, whereas just putting the token in double quotes will not. Having said that, in this instance, they are probably equivalent.
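The difference is easy to see with plain string substitution. Below is a Python sketch of the two approaches; the s_filter function only approximates what the |s filter does (backslash-escaping embedded double quotes before wrapping), but it shows why the two forms diverge for tricky values:

```python
def naive_quote(token: str) -> str:
    # What "$token$" does: wrap the raw value in double quotes, nothing more.
    return '"' + token + '"'

def s_filter(token: str) -> str:
    # Approximation of the |s filter: escape backslashes and embedded
    # double quotes before wrapping, so the result stays one string literal.
    return '"' + token.replace('\\', '\\\\').replace('"', '\\"') + '"'

plain = 's_user'
tricky = 'he said "hi"'

print(naive_quote(plain))   # "s_user" -- fine
print(s_filter(plain))      # "s_user" -- identical for simple values
print(naive_quote(tricky))  # "he said "hi"" -- broken string literal
print(s_filter(tricky))     # "he said \"hi\"" -- still one valid literal
```

For a simple value like s_user both forms produce the same text, which is why they are probably equivalent in this instance.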
hi @wjrbrady , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
After the Splunk Master enters maintenance mode, one of the indexers goes offline and then back online, and maintenance mode is disabled. The fixup tasks have been stuck for about a week. The number of pending fixup tasks went from around 5xx to 102 after deleting the rb bucket; I assume it's a bucket-syncing issue in the indexer cluster because the client's server is a bit laggy (network delay, low CPU).

There are 40 fixup tasks in progress and 102 fixup tasks pending on the indexer cluster master. The internal log shows that all 40 in-progress tasks report the following errors:

Getting size on disk: Unable to get size on disk for bucket id=xxxxxxxxxxxxx path="/splunkdata/windows/db/rb_xxxxxx" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, or merge-buckets command which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk

Delete dir exists, or failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx; will build bucket locally. err= Failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx from srcs=xxxxxxxxxxxxxxxxxxxxxxx

CMSlave [6205 CallbackRunnerThread] - searchState transition bid=xxxxxxxxxxxxxxxxxxxxx from=PendingSearchable to=Unsearchable reason='fsck failed: exitCode=24 (procId=1717942)'

The internal log shows that all 102 pending tasks report the following error:

ERROR TcpInputProc [6291 ReplicationDataReceiverThread] - event=replicationData status=failed err="Could not open file for bid=windows~xxxxxx err="bucket is already registered with this peer" (Success)"

Does anyone know what "fsck failed exitCode=24" and "bucket is already registered with this peer" mean? How can these issues be resolved to reduce the number of fixup tasks? Thanks.
Hi @sigma

The first thing you could try is adding a $ to the end of the REGEX so that the match is forced to run to the end of the line.

Secondly, are there any other extractions that could be overlapping with this? It's just good to rule out the effects of other props.conf settings on your work!

Also, instead of DEST_KEY = _meta you could try WRITE_META = true, like below, although I don't think this would affect your extraction here:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)$
FORMAT = name::$1 version::$2 message::$3
WRITE_META = true

Have you defined your fields.conf for the indexed fields? Add an entry to fields.conf for each new indexed field:

# fields.conf
[<your_custom_field_name>]
INDEXED = true
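To sanity-check the capture groups, the REGEX from the answer above can be exercised outside Splunk. The sample line below is invented purely to match the pattern's shape (syslog header, IP, process id, host, then the three captured fields); substitute a real event from your data:

```python
import re

# REGEX copied from the answer above; groups: name, optional iLO version, message.
PATTERN = re.compile(
    r'^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+'
    r'(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)$'
)

# Hypothetical sample event shaped to match the pattern -- not real data.
sample = "Aug 12 10:15:30 10.1.2.3 123 host1 hpiLOevent iLO5 - - - power supply restored"

m = PATTERN.match(sample)
if m:
    name, version, message = m.groups()
    print(name, version, message)  # hpiLOevent iLO5 power supply restored
```

If the optional (iLO\d+) part is absent from an event, group 2 comes back as None, which is worth checking against your real data before relying on version:: being populated.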
Found the same file mysteriously auto-created, and after a bit of tinkering found what caused its creation, at least in my case:

splunk backup kvstore -pointInTime true -archiveName my_archive

The file vanishes again once the process finishes. But if for some reason the process crashes or gets killed, the file is left in the filesystem.
Hi @lokeshchanana

Case sensitivity of the token is certainly causing you an issue here: if the token you're setting is "searchCriteria" then you cannot use "searchcriteria".

Also, you can use token filters to add quotes around the token if you prefer, so

| eval search_col = if($searchCriteria|s$ == "s_user", user, path)

should be the same as:

| eval search_col = if("$searchCriteria$" == "s_user", user, path)
Hi @ewok

The Splunk Enterprise systemd unit (splunkd.service) is not shipped with the same AmbientCapabilities=CAP_DAC_READ_SEARCH line that the Universal Forwarder package adds to its own unit (SplunkForwarder.service). That line gives the UF process the Linux capability to bypass discretionary access controls (DAC) and read files such as /var/log/audit/audit.log even when the file is mode 0600 and owned by root. Enterprise installs simply omit that stanza, so splunkd runs with the default capability set and cannot open the audit log unless you relax the permissions or run Splunk as root (both STIG failures).

You can alter this behaviour for Splunk Enterprise (full install) by editing the splunkd.service file:

1. Create the override (run as root):
   systemctl edit splunkd.service
2. Add the AmbientCapabilities line:
   [Service]
   AmbientCapabilities=CAP_DAC_READ_SEARCH
3. Reload systemd and restart Splunk:
   systemctl daemon-reload
   systemctl restart splunkd.service

After the restart, the running splunkd should have the same capability that the UF has and can read /var/log/audit/audit.log without touching file permissions or adding ACLs. The override should not be overwritten by Splunk package upgrades, but always verify after an upgrade that it's still in place.
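One way to verify the capability actually took effect is to decode the CapEff bitmask reported in /proc/<pid>/status for the splunkd process; CAP_DAC_READ_SEARCH is capability number 2 in linux/capability.h. A minimal sketch of the decoding (finding the splunkd PID and reading the status file are left to you; the hex masks below are illustrative values, not read from a live process):

```python
# Check whether a Linux capability bitmask includes CAP_DAC_READ_SEARCH.
# CapEff appears as a hex string in /proc/<pid>/status, e.g. "0000000000000004".
CAP_DAC_READ_SEARCH = 2  # capability number from linux/capability.h

def has_cap(cap_eff_hex: str, cap_number: int) -> bool:
    mask = int(cap_eff_hex, 16)
    return bool((mask >> cap_number) & 1)

print(has_cap("0000000000000004", CAP_DAC_READ_SEARCH))  # True
print(has_cap("0000000000000000", CAP_DAC_READ_SEARCH))  # False
```

Alternatively, `systemctl show splunkd -p AmbientCapabilities` confirms what the unit is configured to grant, while the /proc check confirms what the running process actually holds.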
@cboillot

If you are using syslog-ng, it is preferable to use the host_segment option to extract the host value. This approach helps avoid potential future issues caused by changes in hostname naming conventions or logging patterns that might break regex-based extraction. You can configure the destination stanza in your syslog configuration file to include the device IP address dynamically in the log file path, and then use the host_segment setting to extract the host value for indexing in Splunk.

syslog-ng .conf file, e.g. a destination stanza (the macro may be different if you are using rsyslog or another syslog daemon):

destination d_device_logs { file("/var/log/syslog/$SOURCEIP/${YEAR}-${MONTH}-${DAY}.log"); };

And update inputs.conf with host_segment, e.g.:

[monitor:///var/log/syslog/...]
host_segment = 4

But if you want to stick with regex extraction, then use:

props.conf
[cisco:ise:syslog]
TRANSFORMS-set_host = ise_host_override

transforms.conf
[ise_host_override]
REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\.\d+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
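As a sanity check on the segment count: host_segment is 1-based, counting path components after the leading slash, so for the file path in the config above segment 4 is the device IP. A quick sketch (the log path here is a hypothetical example of what the destination stanza would produce):

```python
# Illustrates how Splunk's host_segment counts path components:
# 1-based, with the leading slash ignored.
def segment(path: str, n: int) -> str:
    return path.strip("/").split("/")[n - 1]

log_path = "/var/log/syslog/10.1.2.3/2024-08-12.log"  # hypothetical example path
print(segment(log_path, 1))  # var
print(segment(log_path, 4))  # 10.1.2.3 -- so host_segment = 4 picks the device IP
```

If your destination path has a different depth, count the components the same way to pick the right host_segment value.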
Almost! Yes, using the correct case for the token is vital, but putting the token in double quotes is also vital. Using case instead of if is not important.

Token values are substituted as a text substitution into the code of the dashboard (whether Studio or SimpleXML) before the dashboard code is executed. For example, if the searchCriteria token from the dropdown had the value "s_user", the line

| eval search_col = if($searchCriteria$ == "s_user", user, path)

would become

| eval search_col = if(s_user == "s_user", user, path)

i.e. "if the s_user field has the string value s_user". Using double quotes,

| eval search_col = if("$searchCriteria$" == "s_user", user, path)

gives the line the intended meaning:

| eval search_col = if("s_user" == "s_user", user, path)
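The substitution described above is plain text replacement, which a few lines of Python can mimic. The token name and value are taken from the example; the replace() call stands in for what the dashboard engine does before the search runs:

```python
# Mimics dashboard token substitution: plain text replacement before execution.
template_unquoted = '| eval search_col = if($searchCriteria$ == "s_user", user, path)'
template_quoted   = '| eval search_col = if("$searchCriteria$" == "s_user", user, path)'

token_value = "s_user"

print(template_unquoted.replace("$searchCriteria$", token_value))
# | eval search_col = if(s_user == "s_user", user, path)   <- compares a FIELD to a string

print(template_quoted.replace("$searchCriteria$", token_value))
# | eval search_col = if("s_user" == "s_user", user, path) <- compares string to string
```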
@lokeshchanana

Simple XML tokens are case-sensitive. You must use $searchCriteria$, matching the capitalization in your input name:

| eval search_col = if("$searchCriteria$" == "s_user", user, path)

Also, you can use case() with token substitution:

| eval search_col = case("$searchCriteria$" == "s_user", user, 1==1, path)

The quoting forces Splunk to treat "$searchCriteria$" as a string literal and compare it properly.

Regards,
Prewin