All Posts

Hi @talente

How long has it been since you requested the lab? Sometimes these take 10-15 minutes or more to start up.

Is the lab URL on a specific port (e.g. 8000) and, if so, can you access that port for other sites from your network? For example, try http://portquiz.net:8000/

Which lab is it you are working on?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
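For a quick way to run that check from the same network, a one-line test (assuming curl is available on the machine) will confirm whether outbound traffic on port 8000 is allowed at all:

curl -v http://portquiz.net:8000/

If the connection succeeds, the port is open outbound and the problem is more likely with the lab instance itself.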
Hi, I tried to set up the Splunk lab but the URL for the instance is not working.
Hi @dompico

I assume that this is installed on a heavy forwarder within your environment? Please can you confirm how you've installed the app?

It looks like the app is looking for authhosts.conf, which it cannot find. The app doesn't ship with this file, so I presume it's generated as part of the modular input when it runs. Are there any other errors before this one, relating to the retrieval of content from S1, that might be used to populate this conf file?

There's a similar thread at https://community.splunk.com/t5/All-Apps-and-Add-ons/sentinelone-app-no-longer-able-to-connect-to-sentinelone/m-p/692354

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
On the bug fix for this issue, Splunk Support have come back with the following...

Observation & Findings: Thanks for flagging this issue with us; we have taken this to the development team. We informed you that our development team is having high-level discussions on whether to deprecate the xpath command or enhance it. Once the xpath enhancement or deprecation is done, it will be updated in the official documentation. As this task will go through some pre-checks, post-checks and approvals, it might take some time. So workarounds are the only option, for now.

Here's a more generic regex to remove the different sorts of XML declarations (note: it removes CDATA entries too):

| ...
``` example: https://regex101.com/r/BqHeX4/3 ```
| eval xml=replace(_raw, "(?s)(\<[\?\!]([^\\>]+\>).+?)*(?=\<[^(?=\/)])(?=[a-zA-Z])*", "")
| rex mode=sed field=_raw "s/(?s)(\<[\?\!]([^\\>]+\>).+?)*(?=\<[^(?=\/)])(?=[a-zA-Z])*//g"
``` sed example for a props.conf SEDCMD to remove XML declarations before indexing ```
| xpath ...

Finally, there is another bug (Splunk said they are aware) with the xpath command when it is used more than once: any existing multi-value fields become non-multi-value fields (as if a nomv command had been applied), so any mv manipulations should be done before subsequent xpath commands.
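For completeness, a minimal sketch of how that same expression could be applied as a SEDCMD in props.conf at index time. The sourcetype name here is a placeholder, not something from the original thread; the sed expression is the one quoted above:

# props.conf on the parsing tier - [my:xml:sourcetype] is a hypothetical sourcetype name
[my:xml:sourcetype]
SEDCMD-strip_xml_declarations = s/(?s)(\<[\?\!]([^\\>]+\>).+?)*(?=\<[^(?=\/)])(?=[a-zA-Z])*//g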
Hello, I'm trying to get SentinelOne data into my cloud instance but I'm getting errors similar to this related to the inputs. At first I was having an issue with authentication errors using the API. I believe that's resolved after regenerating the key, because these are the only logs I can see in the index I created for S1.

error_message="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/********?output_mode=json" error_type="<class 'splunk.ResourceNotFound'>" error_arguments="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/***********?output_mode=json" error_filename="s1_client.py" error_line_number="162" input_guid="*****************" input_name="Threats"
The Splunk universal forwarder image is currently not compatible with OpenShift. The image architecture requires the use of sudo to switch between users and to run as specific UIDs, which are not compatible with the UIDs OpenShift assigns. Are you planning to ever fix your image to make it compatible with OpenShift, so it can run as a sidecar container?
Thank you for the quick and helpful reply. I figured that was probably the answer. In the meantime I'm working with the data owner at the origin to see if they can mitigate the issue on their end. Clearly something isn't right on the Azure client side and that'll need to be fixed. 
Hi @gazoscreek

No, the blacklist parameter in inputs.conf is not applicable for filtering event content collected by the Splunk_TA_microsoft-cloudservices add-on. The blacklist parameter is used for file-based inputs (monitor, batch) to exclude files or directories based on their path. Splunk_TA_microsoft-cloudservices collects data via APIs, not from files.

I believe you're stuck with the index-time parsing option which you are already looking at. Would you be able to share your config for this? We may be able to find some performance improvements which might help.

Also, what is your architecture like? If there is too much pressure on your HF to do this parsing, are there other intermediate forwarders that you could do it on, or perhaps even the indexers? This falls into the "it depends" category a little as I don't have all the info, but there may be some options out there.

Regarding the ingest_eval on another instance after the data has already been parsed on your HF, you can use RULESET- settings in props.conf to call transforms - this is what Ingest Actions does to achieve transforms on already-parsed data.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
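To illustrate the RULESET- approach, a minimal sketch assuming a hypothetical sourcetype name and a deliberately simplified version of the (placeholder) pattern from the question below; it routes matching events to the nullQueue at index time even if they were already parsed upstream:

# props.conf - the sourcetype name is an assumption, substitute the sourcetype of your NSG events
[mscs:nsg:flow]
RULESET-drop_nsg_noise = drop_nsg_noise

# transforms.conf - send matching events to the nullQueue via INGEST_EVAL
[drop_nsg_noise]
INGEST_EVAL = queue=if(match(_raw, "NETWORKSECURITYGROUPS/NSGBLAHBLAH"), "nullQueue", queue)

The match() pattern here is shortened; the fuller regex with the IP alternation from the ingest action in the question could be dropped in instead.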
Wondering if there's a blacklist parameter I can add to one of my Azure inputs so that Splunk will ignore pulling the event across the WAN. I already have a working ingest action, but the amount of data that's coming across is causing memory issues on my forwarder. My working ingest action is this:

NETWORKSECURITYGROUPS\\/NSGBLAHBLAH.*?(IP\\.IP\\.IP\\.IP|IP\\.IP\\.IP\\.IP)

Is there an inputs.conf parameter I can set to this regex so that the data will be ignored at the source?
Hi @Dallastek1

I think you might need the Global Reader/Security Reader roles for this API call, in addition to the ReportingWebService.Read.All application permission you have. Check out the following Microsoft docs relating to this: https://learn.microsoft.com/en-us/previous-versions/office/developer/o365-enterprise-developers/jj984325(v=office.15)#:~:text=Assign%20Azure%20AD%20roles%20to%20the%20application

The following spreadsheet is useful for determining which permissions are used by the different Microsoft/Azure/O365 TAs and is worth bookmarking: https://docs.google.com/spreadsheets/d/1YJAqNmcXZU-7O9CxVKupOkR6q2S8TXriMeLAUMYmMs4/edit?gid=0#gid=0

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Should have all the required permissions unless there is something specific we missed.
I have configured the Microsoft 365 / Office 365 inputs; all are working except Message Trace. I rebuilt the input but am still getting this error message when checking the internal logs. All other Exchange mailbox data is coming in, and all inputs use the same account.
I was facing similar "deployment client disabled" issues during the lab setup with 1 SH, 2 IDX, 1 MC and 1 UF. After all the trials which did not work, I placed the deploymentclient.conf file in the two locations below, which worked:

1 --> /opt/splunk/etc/apps/all_deploymentclient
2 --> /opt/splunk/etc/system/local

Stanza below:

[deployment-client]
disabled = false
clientName = example(mc,idx,uf/hf)

# Change the targetUri to point to deployment server
[target-broker:deploymentServer]
targetUri = deploymentserver.splunk.mycompany.com:8089
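If it helps anyone following the same steps, a quick way to confirm which deploymentclient.conf settings have actually been picked up (paths assume a default /opt/splunk install) is:

/opt/splunk/bin/splunk btool deploymentclient list --debug
/opt/splunk/bin/splunk restart

btool shows the winning configuration and the file each setting came from, and the restart is needed for the new phone-home settings to take effect.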
Hello Splunk Community!

Welcome to the June edition of the Splunk Answers Community Content Calendar! Get ready for this week's post dedicated to Splunk Dashboards! We're celebrating the power of community by sharing solutions for common dashboard challenges (like panel widths and time range configurations) and spotlighting the invaluable contributions of our Splunk users and experts on the Dashboards & Visualizations board.

Dashboard CSS Width setup doesn't work anymore with 9.x version
Upgrading to the latest Splunk version (9.x) can bring a host of improvements and new features. However, sometimes updates can introduce unexpected challenges. One issue that some Splunk users have encountered is related to custom CSS styling in classic XML dashboards, specifically affecting panel widths. This issue was brought to light by JulienKVT.

The Problem: Custom CSS Panel Widths No Longer Working
Many Splunk administrators and developers rely on custom CSS to fine-tune the layout and appearance of their dashboards. A common use case is setting specific widths for panels within a row, allowing for a more tailored and visually appealing presentation of data. The intention of the code by JulienKVT is to set #Panel1 to 15% width and #Panel2 to 85% width within the row. However, after upgrading to Splunk 9.x, JulienKVT mentioned that this CSS styling no longer works as expected: panels revert to their default layouts (e.g. 50/50, 33/33/33), ignoring the custom CSS rules. This issue can be frustrating for users who have carefully crafted their dashboards and rely on specific panel layouts for optimal data visualization. While Splunk's Dashboard Studio offers a more modern approach to dashboard creation, migrating existing dashboards can be a time-consuming and complex task. Many users need a solution that allows them to maintain their existing classic dashboards while still benefiting from the latest Splunk version.

The Solution: Replace width with max-width in Your CSS
Our contributor Paul_Dixon suggested a solution that involves a minor modification to your existing CSS code: instead of using the width property, try using max-width. The key difference between width and max-width lies in how they're interpreted by the browser. width sets a fixed width for the element, while max-width sets the maximum allowed width; the element can be smaller than the max-width if other constraints apply. In the context of Splunk 9.x dashboards, it's possible that changes in the underlying layout engine are interfering with the width property. By using max-width, you're essentially giving the panel a hint about its desired size while still allowing it to adapt to other layout constraints. The !important flag ensures that this style takes precedence over other conflicting styles. Thanks to our contributor Paul_Dixon for providing a clear solution. Give it a try and let us know in the comments if it works for you! Thanks to the community for sharing this valuable tip!
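To make the fix concrete, here is a sketch of the max-width approach. It assumes the two panels carry id="Panel1" and id="Panel2" in the Simple XML, as in the example described above; adjust the selectors and percentages to your own dashboard:

/* custom CSS for a classic (Simple XML) dashboard */
#Panel1 {
  max-width: 15% !important;  /* previously: width: 15% */
}
#Panel2 {
  max-width: 85% !important;  /* previously: width: 85% */
}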
Splunk Dashboard: Combining Time Ranges
Splunk dashboards are indispensable for visualizing and analyzing data. Often, you need to tailor your search queries to achieve the precise results you're after. This post was brought to light by Punnu. We will explore a common scenario: using different time ranges for different parts of a query and hiding specific columns from the output. But more importantly, we'll celebrate the power of the Splunk Answers community in finding solutions to even the most complex challenges.

The Problem: Dynamic Time Ranges and Column Control
Punnu wanted to:
- Use a dashboard input (time picker) for the initial part of a search query. They wanted users to select a specific time range using a dashboard time picker.
- Run the remaining part of the query using a different time range (e.g. the entire day). The reasoning was that events triggered during the initial time range might be processed later in the day.
- Hide specific columns (e.g. message GUID, request time, output time) from the final displayed results. This simplifies the view and focuses on relevant information.

The Solution (Partial): Dynamic Time Range Adjustment (and a Community Discovery!)
While a complete solution requires a more complex setup, Punnu themselves discovered a clever technique for dynamically adjusting the time range based on the current time, thanks to an old post by somesoni2. Punnu found the solution buried in the archives of Splunk Answers, a testament to the long-lasting contributions of expert users like somesoni2. The solution involves a subsearch that dynamically adjusts the time range (a sketch of this pattern is shown after this post). There is a wealth of knowledge available on Splunk Answers, and this highlights the incredible value of the Splunk community.

The Power of Splunk Answers
This solution exemplifies the power of the Splunk Answers community. Expert users have generously shared their knowledge and solutions over the years, creating a vast and invaluable resource. The fact that the user was able to find a working solution from 2017 demonstrates the enduring relevance of the information shared on Splunk Answers. Remember to leverage this incredible resource when facing your own Splunk challenges! The answer you need might already be waiting for you on Splunk Answers.

Kudos to the Expert Users!
We want to give a shout-out to the users like JulienKVT and Punnu who bring these questions to light, and the countless expert users who contribute to Splunk Answers, like Paul_Dixon and somesoni2. Their dedication to helping others and sharing their expertise makes the Splunk community a truly special place. Because of them, you can often find a solution to almost any Splunk challenge, no matter how complex. These unsung heroes are the backbone of the Splunk ecosystem.

Would you like to feature more solutions like this? Reach out to @Anam Siddique in our Splunk Community Slack workspace to highlight your question, answer, or tip in an upcoming Community Content post!

Beyond Splunk Answers, the Splunk Community offers a wealth of valuable resources to deepen your knowledge and connect with other professionals! Here are some great ways to get involved and expand your Splunk expertise:
- Role-Based Learning Paths: Tailored to help you master various aspects of the Splunk Data Platform and enhance your skills.
- Splunk Training & Certifications: A fantastic place to connect with like-minded individuals and access top-notch educational content.
- Community Blogs: Stay up-to-date with the latest news, insights, and updates from the Splunk community.
- User Groups: Join meetups and connect with other Splunk practitioners in your area.
- Splunk Community Programs: Get involved in exclusive programs like SplunkTrust and Super Users, where you can earn recognition and contribute to the community.

And don't forget, you can connect with Splunk users and experts in real time by joining the Slack channel. Dive into these resources today and make the most of your Splunk journey!
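As referenced above, a minimal sketch of the subsearch-based time range adjustment. The index, sourcetype, and table fields are placeholders, and snapping to the start of the day ("@d") is just one illustration of deriving a wider range from the time picker selection; the exact search in the original thread may differ:

index=my_index sourcetype=my_sourcetype
    [| makeresults
     | addinfo
     | eval earliest=relative_time(info_min_time, "@d"), latest=now()
     | return earliest latest ]
| table _time host message

Here the subsearch inherits the time picker's range, reads it via addinfo (info_min_time), and returns new earliest/latest values that the outer search then uses, so the main query can cover the whole day regardless of the picker selection.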
Hi, I tried to follow this by adding this to the end of the URL: %20tz=America%2FNew_York&output_mode=csv"

and in the headers:

headers = { 'Authorization': 'Basic AuthKey','Content-Type':'text/csv','tz':'(GMT-04:00) Eastern Time (US & Canada)' }

This is not returning anything.
Thank you @PrewinThomas, I'm looking at props.conf and transforms.conf of the TA and I don't see any references to last_seen and last_found. What am I missing?
Hi,
Yes, this covers both cases.  If the extra space is not present then sed does nothing.
Hi @PrewinThomas

Can you give more info on the itsi_get_service_health command you are referring to please? This seems like a hallucination, as I cannot find any reference to it online or in any ITSI versions I have.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @Karthikeya

Are you running on-prem or in Splunk Cloud? This could be a number of things, including misconfiguration of limits.conf or a role's concurrency settings, but it is typically caused by resource constraints - put simply, there are more searches running than the system can handle.

If you are on-prem, head over to the Monitoring Console and look at the "Search Activity: Instance" and "Scheduler Activity: Instance" dashboards for more insight into what is going on. Do any specific searches or users stand out here? What is the load on your Splunk infrastructure looking like?

If you are running Splunk Cloud then head over to the Cloud Monitoring Console - see https://help.splunk.com/en/splunk-cloud-platform/administer/admin-manual/9.3.2411/monitor-your-splunk-cloud-platform-deployment/usethe-search-dashboards for more info on the dashboards that might help with search performance.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
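As a starting point for that digging, a rough search over the scheduler logs (standard _internal scheduler fields; adjust the time range and filters as needed) can show which scheduled searches are being skipped and why:

index=_internal sourcetype=scheduler status=skipped
| stats count BY savedsearch_name, app, user, reason
| sort - count

The reason field usually points directly at the cause (for example, which concurrency limit was hit), which helps narrow down whether it is a role/limits setting or overall system load.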