All Posts



Hi @danielbb  As far as I know there aren't any public SPL searches that can determine SVC usage for on-premises ingest-based licensing; however, your Splunk account team usually works through the Splunk Cloud Migration Assessment with customers prior to migration to Splunk Cloud, and this should help shape the environment. I'd suggest speaking to your account team to see if there are any searches they can share to give you this overview.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @shaunm001  Is this on-premises? Do you have any firewalls between your Splunk server and the internet that could be causing an issue?

Given that they both stopped together, I'm leaning towards either connectivity or an expired cert, and since you've already tried a new secret, I'm leaning towards connectivity!

There may be other log levels reporting errors, or different fields; I think the O365 app uses "level=Error", for example. Have a look at the following as a starting point:

index=_internal sourcetype="splunk:ta:o365:log"

Does this show any issues around connectivity, 401/400 errors, or anything like that?
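If nothing shows at ERROR level, a broader sweep across both add-ons' internal logs can help. This is just a sketch; the wildcard source paths are assumptions based on typical TA log file naming, so adjust them to whatever log files exist under $SPLUNK_HOME/var/log/splunk in your environment:

```
index=_internal (source="*splunk_ta_o365*" OR source="*splunk_ta_microsoft-cloudservices*")
| stats count BY source, log_level
```

Grouping by source and log_level should surface which inputs are still logging and at what severity, even if nothing is tagged ERROR.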
Hello, two of our Splunk apps, "Splunk Add-on for Microsoft Cloud Services" and "Splunk Add-on for Office 365", are no longer collecting data. It looks like they stopped working on June 30. I checked the client secret in the Azure App Registrations panel and it had not expired. I went ahead and created a new key anyway and updated the two Splunk app configurations with the new key, but they still aren't collecting any data. I checked index="_internal" log_level=ERROR but didn't really see anything that stood out specific to these apps. Any suggestions on settings I can check, other logs to examine, etc.?
As we prepare to transition our Splunk deployment to the cloud, we are aiming to estimate the Splunk Virtual Compute units (SVCs) that may be consumed during typical usage. Specifically, we are interested in understanding how best to calculate on-prem SVC usage using data available from the _audit index, or any other recommended sources. Our primary focus is on dashboard refreshes, as they represent a significant portion of our ongoing search activity. We're looking for guidance on any methodologies, SPL queries, or best practices that can help us approximate SVC consumption in our current environment to better forecast usage and cost implications post-migration.
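There is no public formula mapping on-prem activity to SVCs, but as a rough proxy you can total search runtime from the _audit index. A sketch, using standard _audit fields (action, info, total_run_time, search_id); the search_id prefixes used to classify scheduled versus ad hoc searches are an assumption to verify against your own audit events:

```
index=_audit action=search info=completed
| eval search_type=case(like(search_id, "'rt_%"), "realtime",
                        like(search_id, "'scheduler%"), "scheduled",
                        true(), "adhoc")
| timechart span=1h sum(total_run_time) AS total_search_seconds BY search_type
```

This gives hourly search-runtime totals by search type, which is only an indicative load profile, not an official SVC figure.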
Hi @sunnykhatik1019  Other than the single-query limit you mentioned, there are no publicly documented historical retention details regarding TruSTAR. I'd recommend reaching out to support and/or your Splunk account team, who may be able to dig into this a little further for you and get proper confirmation/clarification.
Hi @Sankar  Hopefully I understand what you're asking here: you're looking to onboard Confluence audit logs into your Splunk Cloud environment? Is your Confluence on-premises or their cloud SaaS offering?

If you are hosting Confluence on-premises, then you can use a Splunk Universal Forwarder to send logs from the server, using the details on the Confluence docs page to help: https://confluence.atlassian.com/doc/audit-log-integrations-in-confluence-1005333794.html

If you are using their cloud service (e.g. yourCompany.atlassian.net), then you will need to use an administrator account in order to pull the logs; this is a restriction from Atlassian and not something that Splunk is able to work around (see https://support.atlassian.com/confluence-cloud/docs/view-the-audit-log/).

Have you seen the Confluence Cloud Audit Log Ingestor app for pulling the audit logs using the API? I believe this will need the admin-level scoped auth token.

In terms of documentation justifying the elevated access and risk assessment, unfortunately this is an Atlassian control, but it might be worth reaching out to any Atlassian support you have for help with this.
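For the on-premises route, a minimal inputs.conf on the Universal Forwarder might look like the following. This is only a sketch: the log path, index name, and sourcetype are assumptions to replace with your actual Confluence audit log location and Splunk Cloud index:

```
# Hypothetical UF monitor stanza; adjust the path, index and sourcetype
[monitor:///opt/atlassian/confluence/logs/audit/*.log]
index = confluence
sourcetype = confluence:audit
disabled = 0
```

Make sure the target index already exists in Splunk Cloud before the forwarder starts sending.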
Hi community

I've been pulling my hair out for quite some time regarding field extraction using the Splunk_TA_nix app. One thing that has been annoying me is the absence of a field which contains the full command executed. My question/comment, which I seek feedback on:

While trying to figure out why I am not seeing the expected/desired content, I noticed something.

Splunk_TA_nix/default/props.conf

[linux_audit]
REPORT-command = command_for_linux_audit

Splunk_TA_nix/default/transforms.conf

[command_for_linux_audit]
REGEX = exe=.*\/(\S+)\"
FORMAT = command::$1

This regex only applies to the "type=SYSCALL" audit log entry, which is the only one containing "exe=", and it does not work in our environment. There is no trailing quotation mark in our log, so this field is not properly extracted with this regex. To work as intended, it would need to be changed to:

[command_for_linux_audit]
REGEX = exe=.*\/(\S+)
FORMAT = command::$1

This would generate a field called "command" with the executed command (binary) only. Is this just in our environment, where we have a makeshift solution to generate a second audit log file for collection, or is this a general issue?

And the rant: it seems that if not defined elsewhere, the default field separator is space. This means that most <field>=<value> entries in the audit log are extracted. The sourcetype=linux_audit type=PROCTITLE events actually have a field called "proctitle" which contains the full command executed. While a field called "proctitle" is extracted, the value of this field is cut short after the first space, meaning only the command (binary) is available. Assuming this is expected behaviour, I suppose that I have to define a field extraction overriding the "default" behaviour to get a field "proctitle" with the desired content.
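For the proctitle part, a local override along these lines should keep the full value. This is a sketch, assuming your PROCTITLE events carry an already-decoded command line after "proctitle=" (raw auditd hex-encodes it); the stanza name is made up, and placing it in a local/ directory keeps it safe from app upgrades:

```
# local/transforms.conf - capture everything after proctitle= to end of line
[proctitle_full_for_linux_audit]
REGEX = proctitle=(.+)$
FORMAT = proctitle::$1

# local/props.conf
[linux_audit]
REPORT-proctitle_full = proctitle_full_for_linux_audit
```

Since search-time extractions from local/ take precedence over the app's default/, this should win over the space-delimited auto-extraction.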
Hey, thanks for the suggestion. Based on the debug output, it appears Splunk is "seeing" the configuration (see below); why it's not changing anything is the issue.

C:\Program Files\Splunk\etc\system\local\commands.conf

is_risky = false

For the moment, we added the following to web.conf to suppress the warnings, but it's not an optimal situation. We'd definitely prefer to flag individual commands based on our usage.

enable_risky_command_check = false
enable_risky_command_check_dashboard = false

Any thoughts on why Splunk would be ignoring the configuration?
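One thing worth checking: is_risky is a per-command attribute in commands.conf, so it needs to sit under a stanza named for the specific search command rather than at the top of the file. A sketch, using [map] as a hypothetical example of a command you want to un-flag:

```
# C:\Program Files\Splunk\etc\system\local\commands.conf
# is_risky applies per command stanza, not globally
[map]
is_risky = false
```

If the debug output shows the setting under [default] rather than under a command stanza, that would explain why it is read but has no effect.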
Could you please advise:

Is there any Splunk Cloud security policy or best practice guidance on onboarding external data sources when the integration requires admin-level permissions at the source?
Does Splunk recommend or require any formal risk review or CCSA-like process for such cases?
Do you have any documentation or recommendations to share with us to justify this elevated access for log collection?
Are there any alternatives or Splunk add-ons/plugins that could achieve the same without needing admin-level permissions?
@livehybrid  Thanks for the response. Your solution worked well for me; I was able to use it in my use case.

One question I have now: how do I use mstats? Usually we use something like (example):

| mstats avg(cpu.utilization) as avg where index=<indexvalue>

How can I use it here?

Regards,
PNV
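(For reference, mstats runs against metric indexes and must be the first command in the search. A minimal sketch, where the metric name, index, and span are placeholders to adapt:)

```
| mstats avg(cpu.utilization) AS avg_cpu WHERE index=my_metrics_index span=5m BY host
```

With span specified, the results are already time-bucketed, so you can chart them directly or pipe them into further stats/eval steps.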
Thanks @livehybrid, I appreciate the confirmation that this isn't as straightforward as it looks like it should be. I don't understand why they have this obvious feature which *looks* like it should be easy to use, only it doesn't work.

When you got it working in the inline view, were you using a Classic or Dashboard Studio dashboard? I originally tried this with Dashboard Studio and couldn't even get it to display at all; it just throws errors in the mobile client and on the gateway. My inline dashboard (even set to Classic) won't work either; it just never seems to pick up the token required to get it to display any data. I'm going to experiment with a couple more things and then maybe open a ticket and just figure out a workaround. Unless I come up with something better, I'll mark your post as the solution.
The requirement is to get the exact DB server name without the dbmon:xxx and port:xxxx information.
Hi Experts,

Scenario: I have DB agents installed on standalone VMs for a group of DB servers, and they get connected using the DB agent VM. In the event notification, the actual DB server name is coming in the below format:

"dbmon:11432|host:xxxxxxxagl.xxxxxx.com|port:1433"

Is there any way I can customize this using AppDynamics placeholders in the JSON payload? I tried "${event.db.name}" and "${event.node.name}", but it's not working.

Appreciate your inputs.

Thanks,
Raj
Subject: TruSTAR API: Data Retention Policy Inquiry

Dear Splunk Community,

We are currently utilizing your search_indicators API, as documented here: https://docs.trustar.co/api/v13/indicators/search_indicators.html. While we understand that the API supports a maximum time range of 1 year per query, we require clarification on the overall data retention policy for indicators. I just want to know the total historical period for which indicator data is stored and retrievable via this API, regardless of the single-query window limit. Your insight into this would be greatly appreciated for our data strategy.
Hi @Wiessiet  I went down a rabbit hole trying to get this to work last night but unfortunately didn't end up with any progress. Ultimately I don't think it is possible to pass params to the dashboard via the "View Dashboard" link.

I did manage to get the token working in the dashboard display which appears when you click the alert, but I assume you want it to follow through into the View Dashboard button?

I agree with you re the docs: not a lot of info or examples!
Hi @sureshmani04, hope you got the solution. If so, could you please mark the question as solved, so it will be moved from unanswered to solved? Karma/upvotes are always welcome. Thanks.
To use a lookup to enrich a search, the lookup needs to exist as a lookup on the search head. A lookup on a heavy forwarder is not going to be available at search time.

What you need to do is get a copy of the lookup on the SH. The easiest (imo) option is to index the lookup file on the HF: simply define it as an input on the HF and have Splunk monitor it for changes. You can send this to any index, but let's assume you create and use one called "lookups_index" and sourcetype "my_hf_lookup".

On your search head, you can now create a lookup-generating search. Depending on what your lookup contains (dates, product_ids, error codes), you would create a search like:

index=lookups_index sourcetype=my_hf_lookup
| dedup product_code
| table product_code product_description product_price
| outputlookup my_sh_lookup.csv

I like to name these something like "LOOKUPGEN-my_sh_lookup.csv". You can then schedule that to run once a day/week/hour (depending on your anticipated lookup change frequency).

You can then use:

| lookup my_sh_lookup.csv product_code OUTPUT product_name product_price

in your searches, although I find it better practice to actually create a lookup definition and use that.
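The lookup-definition approach mentioned at the end can be sketched in transforms.conf on the search head (or created via Settings > Lookups > Lookup definitions in the UI); the names here are just the ones from the example above:

```
# transforms.conf on the search head - a sketch
[my_sh_lookup]
filename = my_sh_lookup.csv
```

Searches then reference the definition name rather than the file (| lookup my_sh_lookup product_code OUTPUT product_name product_price), so the underlying CSV can be renamed or swapped without touching any saved searches.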
Indeed, you're right.

| makeresults
| eval a=mvappend("","")
| eval c=mvcount(a)

And we have c=2. That's funny. One should really deliberately choose some non-existent field.
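To illustrate the alternative: using null() instead of an empty string keeps the value out of the multivalue field entirely, since mvappend skips null arguments. A quick sketch:

```
| makeresults
| eval a=mvappend(null(), "x")
| eval c=mvcount(a)
```

Here c should come out as 1, because only "x" makes it into the multivalue field.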
Thanks for your reply. I will contact support about this bug.
Yeah, but for now I just want to know if it's possible to disable it from the stream conf. I know it's better for full visibility, but besides the license limits, I also want to know the possibilities.