All Posts

Hello, @jtacy. A question: is the file being changed under C:\Program Files\SplunkUniversalForwarder\etc\system\local\? Thank you very much. Regards.
Please check the truncated event from the syslog server. We are attempting to send logs to both the Splunk indexer and the syslog server because different teams handle distinct log types. My team manages the system security logs, specifically for SOC team monitoring.
Thanks so much.
Field aliases are specific to a sourcetype.  To have an alias for a field in two sourcetypes requires two aliases.
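As an illustration, a props.conf sketch (the sourcetype and field names here are hypothetical) defining the same alias once per sourcetype:

```
# props.conf -- hypothetical sourcetypes and field names
[my_sourcetype_a]
FIELDALIAS-user_alias = src_user AS user

[my_sourcetype_b]
FIELDALIAS-user_alias = src_user AS user
```

Both stanzas are needed because Splunk applies props.conf alias settings per stanza (sourcetype, source, or host).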
A user wants to create a new field alias for a field that appears in two sourcetypes. How many field aliases need to be created? One or two? It should be one, but the answer says two. Please explain.
Hi @sarvananth, Have you reviewed rsyslog documentation for maximum message length and line endings? If you're forwarding using a syslog output over UDP, the transport itself has a limit of 65,535 bytes per datagram (subtract headers for maximum payload length). You may also want to transform the events by replacing line endings with an escape sequence of your choosing (or one required by the consumer).
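If the consumer requires single-line events, one option on the Splunk side is a SEDCMD in props.conf that rewrites line endings before events leave the heavy forwarder. The sourcetype name and escape sequence below are assumptions, not your actual configuration:

```
# props.conf on the heavy forwarder -- hypothetical sourcetype
[my:multiline:sourcetype]
# replace carriage returns and newlines with a literal \n escape sequence
SEDCMD-escape_newlines = s/[\r\n]+/\\n/g
```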
Hi Splunkers, The origin of the problem was corrupted buckets; in my case, 3 buckets were corrupted. This is what happens when an analyst pushes a bad search request and the splunkd daemon gets killed on indexers that are up and running during the decommissioning of one of them. Check: https://docs.splunk.com/Documentation/Splunk/Latest/Troubleshooting/CommandlinetoolsforusewithSupport#fsck I used the command below on the indexer holding the bucket (that indexer has to be stopped too): >>> splunk fsck repair [bucket_path] [index] (use "find /indexes/path | grep bucket_uid$" to find the bucket's path). That fsck confirmed the problem. In my case, it was not repairable, so, since the data was old and very small, the decision was made to delete these buckets. After that, everything went back to normal. Problem solved. Thanks for the help
What happens when you run the following command from <your_stack_url>/app/splunk-app-sfdc/search: | inputlookup lookup_sfdc_usernames Do you see any results? Do you have any duplicate definitions of LOOKUP-SFDC-USER_NAME under Settings > Lookups > Automatic Lookups with App: All and Owner: Any? When you search against sourcetype=sfdc:loginhistory, do you still see errors? You can view search logs from Job > Inspect Job. In search.log, search for LOOKUP-SFDC-USER_NAME to see additional context. To view logs from indexers, add noop to your search: index=your_index sourcetype=sfdc:loginhistory | noop remote_log_fetch=*
The screenshot shows an untruncated event.  What makes you believe the logs are getting truncated?  Please show a sanitized sample truncated event. Why are the events going from a Splunk HF to a syslog server instead of to a Splunk indexer?
Hi, First of all, thanks for helping me with this issue. I tried everything you suggested, but I still have the same error.
- The inputs.conf file on my UF doesn't allow configuring the interface (I checked the inputs.conf.spec file to verify this).
- My kernel is up to date, so the problem isn't there.
- As for versions, after checking, I have the latest versions of the UF and the Add-on available on Splunkbase.
- In Community Resources, I found one link related to this type of problem, but it has no answer. Here is the link if you are interested: https://community.splunk.com/t5/Deployment-Architecture/streamfwd-app-error-in-var-log-splunk-streamfwd-log/m-p/675366#M27880
If you have any further suggestions to fix my issue, I would be very grateful to hear them.
Please share some anonymised sample events to show what you are working with
We are using a Splunk Universal Forwarder (UF) to forward logs from a Windows server to a Splunk Heavy Forwarder (HF). However, an issue arises when the HF receives logs of a specific type as multiline events: when forwarding these logs from the HF to a syslog server (a Linux server running rsyslog), the logs are truncated. How can we resolve this issue?
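For reference, a minimal outputs.conf sketch for a syslog output on the HF (the server address and group name are assumptions); choosing TCP sidesteps the 64 KB per-datagram ceiling that UDP imposes:

```
# outputs.conf on the heavy forwarder -- hypothetical syslog target
[syslog:my_syslog_group]
server = syslog.example.com:514
type = tcp
```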
I want to write a query that prints the users who are not authorized to enter; a lookup table contains the people who are authorized to enter.
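One common pattern is to exclude the authorized list with a subsearch; the index, lookup file, and field names below are assumptions:

```
index=entry_logs
    NOT [| inputlookup authorized_users.csv | fields user ]
| stats count BY user
```

The subsearch returns the authorized user values, and NOT keeps only events whose user is absent from the lookup.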
Addon is also installed.
Hi there, Here's a breakdown of potential issues and solutions:
1. Regex Accuracy: Double-check that the regular expressions (REGEX) accurately match your expected data patterns. Test them thoroughly using online regex testers or Splunk's rex command. Ensure the source and sourcetype fields contain the correct values for extraction.
2. FORMAT: The FORMAT field should use $1 to reference the first captured group from the regex, not $environment. Here's the corrected format: FORMAT = complaince_int_front::@service_$1
3. Transform Order: If both transforms are applied to the same data, consider their order. The environment_extraction transform might overwrite service_extraction if it runs first. Adjust the order of the TRANSFORMS list in props.conf if needed.
4. props.conf: Verify that props.conf correctly applies the transform that sets the _MetaData:Index field at index time.
5. Troubleshooting Steps: Review Splunk's internal logs for errors or warnings related to transforms, and isolate issues by testing against sample data in a non-production environment.
Additional Tips: Consult Splunk's documentation for in-depth guidance on transforms and regular expressions. Remember to test changes thoroughly before deploying to production, and regularly review transforms to ensure they align with evolving data patterns.
~ If the reply helps, a Karma upvote would be appreciated
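As a hedged sketch of the index-routing pattern discussed here (the regex, field name, and stanza name are illustrative, not your actual configuration):

```
# transforms.conf -- hypothetical index-routing transform
[service_extraction]
REGEX = service=(\w+)
SOURCE_KEY = _raw
DEST_KEY = _MetaData:Index
FORMAT = service_$1
```

Here $1 is the first capture group from REGEX, and the resulting FORMAT value becomes the destination index name.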
Hi there, Here's what I've gathered:
Potential Reasons for the Override:
- Consistency: OneIdentity might strive for consistent parameter naming across its apps and transforms, aligning with internal conventions or broader Splunk best practices.
- Functionality: Specific features or integrations within the OneIdentity-Safeguard app might necessitate these parameter names for proper operation.
- Security Considerations: Security enhancements or data handling requirements could be driving the parameter name modifications.
Next Steps:
- Consult Documentation: Thoroughly review the OneIdentity-Safeguard app's documentation for any explicit explanations of the parameter name changes.
- Reach Out to OneIdentity: If the documentation doesn't provide clarity, engage OneIdentity's support or community forums for direct answers from experts.
- Adapt Searches: Adjust your existing Splunk searches and dashboards to accommodate the new parameter names (e.g., using ip instead of clientip).
Additional Considerations:
- Customizations: If you've made custom modifications to the dnslookup transform, carefully review and update them to align with the new parameter names.
- Third-Party Apps: If you're using third-party apps that rely on the dnslookup transform, ensure compatibility with the updated parameter names.
Key Points: Understanding the rationale behind such changes helps ensure smooth integration with other apps and maintain data integrity. Collaboration with OneIdentity or their community can provide valuable insights, and proactive adaptation of searches and configurations will maintain the functionality of your Splunk environment.
~ If the reply helps, a Karma upvote would be appreciated
Hi @zymeworks, the only way to assign an index to an app is to upload a custom app containing the indexes.conf file. Otherwise it isn't possible, but why do you need this? It's relevant in on-premise installations because this way you always know where the indexes.conf file is, to manage it (eventually modifying it) or to port the app to another instance. But in Splunk Cloud it isn't so relevant because you can modify the index only via the GUI. Ciao. Giuseppe
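For an on-premise deployment, such a custom app might carry little more than this (the app layout and index name are hypothetical):

```
# my_indexes_app/default/indexes.conf -- hypothetical index name
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
```

On Splunk Cloud the storage paths are managed by the platform, which is part of why index changes there go through the GUI instead.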
Hi there, Here's a breakdown of the issue and potential solutions:
Understanding the Issue: When using a base search within a form with submitButton="true", dependent inputs won't automatically refresh when the tokens they rely on change, because the base search only runs when the Submit button is clicked.
Solutions:
1. Trigger the Base Search Manually: Add a change event handler to the time picker input using JavaScript and, inside the handler, restart the dependent dropdown's search manager, e.g. splunkjs.mvc.Components.getInstance("<dependent_search_id>").startSearch().
2. Separate Searches for Dependent Inputs: Remove the base search from the dependent dropdowns and reintroduce separate searches for each, ensuring they use tokens from the time picker and any other relevant inputs. This triggers the searches automatically when tokens change.
3. Consider Alternatives to the Submit Button: If immediate updates are crucial for all inputs, explore removing the submit button and relying on automatic token-based searches. If a final submit action is still needed, use a separate button or trigger.
Additional Recommendations: Double-check that tokens are correctly referenced in dependent searches, implement changes in a testing environment before deploying to production, and refer to Splunk's documentation on docs.splunk.com for more details on base searches and form behavior.
Choose the solution that best aligns with your dashboard's requirements and desired user experience.
~ If the reply helps, a Karma upvote would be appreciated
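As a sketch of the "separate searches" option (token names, the query, and field names are assumptions), a Simple XML form where the dropdown runs its own search driven by the time picker's tokens:

```xml
<form>
  <fieldset submitButton="true">
    <!-- searchWhenChanged lets this input update its tokens immediately -->
    <input type="time" token="time_tok" searchWhenChanged="true">
      <default>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <!-- dependent dropdown runs its own search instead of sharing a base search -->
    <input type="dropdown" token="service_name_token" searchWhenChanged="true">
      <search>
        <query>index=_internal | stats count BY sourcetype</query>
        <earliest>$time_tok.earliest$</earliest>
        <latest>$time_tok.latest$</latest>
      </search>
      <fieldForLabel>sourcetype</fieldForLabel>
      <fieldForValue>sourcetype</fieldForValue>
    </input>
  </fieldset>
</form>
```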
Hi there, While "click.name2" doesn't have a direct equivalent, here are effective approaches:
1. Use "row.name2" for Single Values: If you're accessing a single value from a clicked row in a table, use $row.name2$ instead of "click.name2". Ensure the field name matches exactly (case-sensitive).
2. Employ Context Variables for Complex Data: For more complex data passing, set a context variable in the source dashboard, then access the value in the target dashboard as $context.name2$.
3. Consider URL Tokens for Cross-Dashboard Linking: If you're linking to a different dashboard, use URL tokens like ?form.field1=$row.name2$ in the link's URL.
Additional Tips: Double-check field names for accuracy and capitalization, ensure both dashboards share the same search context if using context variables, and consult the Splunk documentation on docs.splunk.com for more details on token usage and context variables.
If you're still facing issues, provide more information about your dashboard structure and specific use case for tailored guidance.
Remember: Token syntax differs between Simple XML and Dashboard Studio, so understanding these differences is crucial. Experiment with different approaches to find the best fit for your specific needs.
~ If the reply helps, a Karma upvote would be appreciated
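As a sketch of the $row.name2$ approach (the dashboard path, query, and field names are assumptions), a table drilldown that passes the clicked row's value to a target dashboard via a form token:

```xml
<table>
  <search>
    <query>index=_internal | stats count BY name2</query>
  </search>
  <drilldown>
    <!-- $row.name2$ resolves to the value of the name2 column in the clicked row -->
    <link target="_blank">/app/search/target_dashboard?form.field1=$row.name2$</link>
  </drilldown>
</table>
```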
Hi there, While Studio doesn't directly support prefix, suffix, and delimiter the same way, here are some workarounds using search queries:
1. Concatenate Strings: Use an eval with the string concatenation operator (".") in your search query to combine the desired elements (prefix, value, suffix, delimiter) into a single field. Example: index=_internal | search name="myMetric" | eval combinedValue="prefix_" . value . "_suffix" . "|"
2. Leverage Panel Formatting: Customize panel formatting options such as titles, labels, and tooltips to display combined values as needed.
3. Utilize Calculated Fields: Create calculated fields in your search query to pre-process data and ensure the desired format within the panel.
4. Consider Panel Types: Explore different panel types in Studio that might natively support your formatting needs (e.g., single value panels, charts with custom labels).
5. Reference Older Formats: In Studio, you can still reference and embed panels from your old dashboard format, providing some continuity while exploring new features.
Remember: Adapt the specific solution to your dashboard's unique requirements and desired output format, and experiment with different approaches and panel configurations to find the best fit for your use case.
~ If the reply helps, a Karma upvote would be appreciated