All Posts

Good day, First I want to say that this add-on is an absolute lifesaver when it comes to getting structured data into Splunk, and if you ever put it up on GitHub please let me know - I'd be happy to contribute. I have found a few minor issues. I'll be using the following JSON in my examples:

    {"total":52145,"rows": [
    {"discoverable_guid":"94937859-A157-4C43-94AC-290172D50C4D","component_cpe":{"cpe23":"cpe:2.3:a:oracle:java_runtime_environment:1.8.0_381"},"cve":[]},
    {"discoverable_guid":"2B933591-6192-4E42-9DFC-32C361D32208","component_cpe":{"cpe23":"cpe:2.3:a:oracle:jdk\\/sdk:1.8.0_201"},"cve":[]},
    {"discoverable_guid":"DD854B8C-5900-518C-B8B6-096285936816","component_cpe":{"cpe23":"cpe:2.3:o:microsoft:windows_defender:4.18.1909.6"},"cve":[{"name":"CVE-2006-5270"},{"name":"CVE-2018-0986"},{"name":"CVE-2021-24092"},{"name":"CVE-2021-1647"},{"name":"CVE-2020-1170"},{"name":"CVE-2020-1163"},{"name":"CVE-2020-0835"},{"name":"CVE-2017-8558"},{"name":"CVE-2017-8541"},{"name":"CVE-2017-8540"},{"name":"CVE-2017-8538"},{"name":"CVE-2017-0290"},{"name":"CVE-2019-1255"},{"name":"CVE-2013-0078"},{"name":"CVE-2011-0037"},{"name":"CVE-2020-1461"},{"name":"CVE-2020-1002"},{"name":"CVE-2019-1161"},{"name":"CVE-2017-8542"},{"name":"CVE-2017-8539"},{"name":"CVE-2017-8537"},{"name":"CVE-2017-8536"},{"name":"CVE-2017-8535"},{"name":"CVE-2008-1438"},{"name":"CVE-2008-1437"}]},
    {"discoverable_guid":"ADF7E72A-4A72-4D92-B278-F644E27EA88F","component_cpe":{"cpe23":"cpe:2.3:a:microsoft:.net_framework:4.8.04084"},"cve":[{"name":"CVE-2020-0646"},{"name":"CVE-2020-0606"},{"name":"CVE-2020-0605"},{"name":"CVE-2020-1147"},{"name":"CVE-2022-26832"},{"name":"CVE-2021-24111"},{"name":"CVE-2020-1108"},{"name":"CVE-2019-1083"},{"name":"CVE-2019-1006"},{"name":"CVE-2019-0981"},{"name":"CVE-2019-0980"},{"name":"CVE-2019-0820"},{"name":"CVE-2023-36873"},{"name":"CVE-2022-41064"},{"name":"CVE-2020-16937"},{"name":"CVE-2020-1476"},{"name":"CVE-2019-0864"},{"name":"CVE-2022-30130"}]},
    {"discoverable_guid":"2B933591-6192-4E42-9DFC-32C361D32208","component_cpe":{"cpe23":"cpe:2.3:a:oracle:jdk\\/sdk:1.8.0_261"},"cve":[]}
    ]}

1. There are certain cases where nested JSON is rendered in Splunk with single quotes (') instead of double quotes ("), which makes me have to use

    | rex mode=sed field=<field_with_nested_json> "s/\'/\"/g"

to make it compatible with spath (see the sketch after this post).

2. The "autoextract=0" option when pulling down JSON does not put the contents into a _raw field (as stated in your docs), but instead seems to do first-level extraction. So a page that contains the following JSON (EDIT - covered in #3 below) renders looking like this when I use getwatchlist json <url> autoextract=0.

3. The "dataKey" parameter: all of the parameters seem to be case-sensitive. "dataKey=rows" produces correct content, whereas "datakey=rows" seems to ignore the parameter entirely.

4. Your docs don't seem to match the feature set or version in all places - the Splunkbase "Details" tab still refers to 1.3.2, and the add-on "About" tab (after install) refers to 1.3.3 but does not include details of the URL parsing features, which can only be found in your release notes on Splunkbase.

5. The flattenJson parameter does not seem to be working at all. I find references to it in the code, but if I put it into the search as a parameter Splunk does not recognize it as such, yet it also does not treat it as a custom field either.
As I said above, this add-on is great work, and literally the only things I could ask for "extra" are maybe xml parsing, and being able to perhaps pass URL parameters as an array. EDIT: A little more testing made me realize that a lot of my problems are specific to capitalization of the command parameters.  I've edited #3 above
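For readers hitting the same single-quote issue from point 1: a minimal SPL sketch of the full workaround, assuming a hypothetical field named nested_json that holds the single-quoted payload with the rows layout from the sample above:

    | rex mode=sed field=nested_json "s/'/\"/g"
    | spath input=nested_json path=rows{} output=row
    | mvexpand row
    | spath input=row

After the final spath, each event carries discoverable_guid, component_cpe.cpe23, and a multivalue cve{}.name, ready for stats.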
Hi @phanTom  Thanks for your reply. On my connector, there are some actions that are repeated a lot, and having logs on them could flood the logs. I was hoping to add those logs only if the customer chooses to enable them. What is the reason for having different logging levels if we cannot decide whether to print them or not? 
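One common pattern for this is to gate the chatty calls behind an asset configuration parameter, so each customer opts in per asset. A minimal Python sketch, assuming a hypothetical boolean asset parameter named enable_verbose_logging declared in the app's configuration JSON:

    from phantom.base_connector import BaseConnector
    import phantom.app as phantom

    class MyConnector(BaseConnector):
        def initialize(self):
            # Read the asset configuration once at startup.
            config = self.get_config()
            # "enable_verbose_logging" is a hypothetical parameter name;
            # it must be declared in the app's configuration JSON.
            self._verbose = config.get("enable_verbose_logging", False)
            return phantom.APP_SUCCESS

        def _handle_repeated_action(self, param):
            # Emit per-iteration detail only when the customer opted in;
            # keep save_progress for milestones every run should show.
            if self._verbose:
                self.debug_print("repeated_action param", param)
            self.save_progress("repeated action completed")
            return phantom.APP_SUCCESS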
With which part of the task do you need help?  What have you tried so far?  Have you seen the Website Monitoring app (https://splunkbase.splunk.com/app/1493)?
The absolute best practice is to always explicitly define your timestamp along with the timezone.  You can get away without it most of the time, but when it doesn't work, always revert to a proper definition.
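For reference, a minimal props.conf sketch of an explicit definition, assuming a hypothetical sourcetype my_app_logs whose events begin with timestamps like 2024-01-15 08:30:00:

    [my_app_logs]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    TZ = America/New_York

TZ only applies when the timestamp itself carries no offset; if your events include one, add %z to TIME_FORMAT instead.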
@jamie1 - Here is generic guidance that will boost your Splunk journey. For data collection, you should look for add-ons on Splunkbase. For dashboards, you should look for apps on Splunkbase. Generally, these apps and add-ons work the same on-prem vs Cloud (and that's why you don't see much difference in documentation, as there isn't much difference). Regarding the dashboards, this is something you can try - https://docs.splunk.com/Documentation/CPWindowsDash/latest/CP/About But if this is just regarding CPU performance, you can also try creating your own based on your requirements. I hope this helps!!! Kindly upvote if it does!!!
Hello everyone

Two-parter. First of all, am I correct in assuming that /appname/metadata/local.meta takes precedence over /appname/metadata/default.meta? The reason for this question is that while applying changes in a SH cluster from a deployer, changes made to default.meta in an app have no effect, and changes in local.meta are retained.

Second, what is the best practice for editing the local.meta file? I think I, at least, need to completely remove the read/write permissions so it falls back on the default.meta file:

    []
    access = read : [ ], write : [ ]
    export = none
    version = x.x.x.x
    modtime = tttttttttt.tttttttt

Otherwise any future edits made and rolled out will not take effect, as there is a local.meta file. Though I was hoping that I could just delete the entire local.meta file? To be clear, the actual question: can I
    a) delete the entire local.meta file, or do I have to
    b) edit out the desired section in the local.meta file?
I know I can edit access to alerts/dashboards etc via the GUI, though I'd like to edit everything in the app in one single move from the CLI.

All the best // f
Hi @AL3Z, if you don't want to manually manage Assets and Identities (I'm supposing that you are speaking of ES), the prerequisite is the availability of external archives containing the company assets and identities that you can query using a script, an LDAP query, or a DB Connect query. Having these archives, you have to query them, saving the results in an index, and then move them from the index into the predefined ES lookups. Obviously this job requires that you analyze your data and the record set of the ES Assets and Identities. Ciao. Giuseppe
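A minimal SPL sketch of that last step, assuming a hypothetical index asset_inventory populated by the script or DB Connect input; the columns follow the ES asset lookup schema:

    index=asset_inventory
    | dedup ip
    | table ip mac nt_host dns owner priority category
    | outputlookup my_company_assets.csv

Save it as a scheduled search, then register my_company_assets.csv as an asset source in ES under Asset and Identity Management.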
@m_nouman - You need to use time_window with the streamstats command, as suggested by @PickleRick. For the timechart and bin commands, use span=10m.   I hope this helps!!!
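A minimal sketch of all three forms, assuming a 10-minute window over events split by host (index=main is a placeholder for your base search):

    index=main | streamstats time_window=10m count AS events_last_10m BY host
    index=main | timechart span=10m count BY host
    index=main | bin _time span=10m | stats count BY _time, host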
@Monstah54 - Have you tried using a different browser? Most likely it could be a browser cache issue.   I hope this helps!!!
Hi, Can someone please assist me in setting up assets and identities from scratch, and what prerequisites are necessary for this? Thanks in advance.
@parthiban - You need to use a dependent Dropdown filter. Here is a reference example - https://community.splunk.com/t5/Dashboards-Visualizations/How-to-create-multiple-dependent-dropdowns-on-a-dashboard/m-p/391089   I hope this helps!!! Kindly upvote if this helps!!
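A minimal Simple XML sketch of the pattern from that thread, assuming hypothetical fields region and city in index=main; the second dropdown's populating search consumes the first dropdown's token:

    <fieldset>
      <input type="dropdown" token="region_tok">
        <label>Region</label>
        <search>
          <query>index=main | stats count BY region</query>
        </search>
        <fieldForLabel>region</fieldForLabel>
        <fieldForValue>region</fieldForValue>
      </input>
      <input type="dropdown" token="city_tok">
        <label>City</label>
        <search>
          <!-- Re-runs whenever $region_tok$ changes -->
          <query>index=main region="$region_tok$" | stats count BY city</query>
        </search>
        <fieldForLabel>city</fieldForLabel>
        <fieldForValue>city</fieldForValue>
      </input>
    </fieldset>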
I am working in a Classic dashboard. I have a gateway address (URL: abc23.com) and I want to check this value after every dashboard refresh, then either display the results of the URL and/or a single value visual with green and red colors: green for when the URL status is "OK", else red.  Any ideas on how I can accomplish this task?   I created a python script that extracts the value into a log, and the dashboard then checks the log, but this doesn't seem like the best approach and is not really what I want. 
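If the script-to-log route stays, a minimal Simple XML sketch of the colored single value, assuming a hypothetical index gateway_checks whose latest event carries a status field:

    <panel>
      <single>
        <search>
          <query>index=gateway_checks | head 1 | eval up=if(status=="OK",1,0) | fields up</query>
          <refresh>5m</refresh>
        </search>
        <option name="colorBy">value</option>
        <option name="useColors">1</option>
        <option name="rangeValues">[0]</option>
        <option name="rangeColors">["0xdc4e41","0x53a051"]</option>
      </single>
    </panel>

A value of 0 renders red and 1 renders green; the Website Monitoring app mentioned in the reply above would avoid the custom script entirely.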
@emesabarrameda I can't seem to find anything in the docs: https://docs.splunk.com/Documentation/SOARonprem/6.2.0/DevelopApps/AppDevAPIRef  Both options you call out have the tag option, which could maybe be used for the INFO/WARNING/TRACE strings. Any reason you want to split into those categories, as it all ends up in spawn.log anyway? 
I'm trying to create an admission rule in workload management with the following behavior: any search with "=*" in the index term will return a predefined message. My intention is to block any search that contains "=*" in any part of the index, such as "index=splun*", "index=spl*", "index=_internal*", etc. I didn't find anything in the documentation that talks about this. Is there any way to create a general rule for this case?
@Carloszavala121 as @jenniandthebets said, the best way is to create your own app for this, or modify the existing one to remove the need for base_url at the asset level and perhaps have it as a parameter for the GET action instead.
Hello, thanks. So how would you connect from Python using the base URL? We have HA/FO servers in front of the Splunk servers.
@splunkreal - For the host you just need the IP address or hostname. The port should be the Splunk management port, generally 8089, and not the Splunk UI port.   I hope this helps!!!
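A minimal sketch with the Splunk Python SDK (splunk-sdk), assuming a hypothetical load-balanced hostname splunk-mgmt.example.com that fronts the HA/FO pair on the management port:

    import splunklib.client as client

    service = client.connect(
        host="splunk-mgmt.example.com",  # LB/VIP in front of the HA/FO servers
        port=8089,                       # management port, not the web UI port
        username="api_user",             # hypothetical service account
        password="changeme",
    )
    # Quick connectivity check: list the installed apps.
    for app in service.apps:
        print(app.name)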
Hello All, Recently we migrated all our indexes to Splunk SmartStore, with our remote storage being Azure Blob. After that we noticed several problems with our environment:

Buckets being stuck in fixup state more often.
Indexing queues being full (no major spike in data indexation).
Huge increase in the number of buckets.
And the list goes on.

We are considering reverting to persistent disk for data storage; however, looking at the Splunk documentation, it is not possible to revert an index configured with SmartStore back to persistent disk. But I'm looking for a way, if it would still be possible to do it, because with the above issues the search performance is abysmal. We have around 6 indexers, each indexer has around 800k buckets, and the current data on remote storage (SmartStore) is 50 TB.

Are there any ways to migrate back to persistent disk? Looking forward to any gray methods to try out as well.

Thanks
Please advise on the optimal solution for this business task. I have a set of events with the following fields:

    city: Almaty
    country: KZ
    latitude: 43.2433
    longitude: 76.8646
    region: Almaty

What would be the best approach to obtain a field indicating the local time of these events using the provided information?
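SPL has no built-in latitude/longitude-to-timezone function, so one approach is a lookup keyed on country and region that carries a UTC offset. A minimal sketch, assuming a hypothetical CSV lookup tz_by_region with fields country, region, utc_offset_hours:

    | lookup tz_by_region country region OUTPUT utc_offset_hours
    | eval local_time=strftime(_time + utc_offset_hours*3600, "%Y-%m-%d %H:%M:%S")

A fixed offset ignores DST, which is fine for Kazakhstan but would need a per-date rule for zones that observe it.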
Hi I am using something like this:

    index=_internal AND sourcetype=splunkd AND TERM(New) AND TERM(Old) AND TERM(properties) AND TERM(are)
    | rex field=_raw "Old properties are: \{ip=(?<old_ip>[^\ ,]*)[\ ,]+dns=(?<old_dns>[^\ ,]*)[\ ,]+hostname=(?<old_hostname>[^\ ,]*)[\ ,]+deploymentClientName=(?<old_deploymentclientname>[^\ ,]*)[\ ,]+.*instanceId=(?<old_instanceid>[^\ ,]*)[\ ,]+instanceName=(?<old_instancename>[^\ ,\}]*)"
    | rex field=_raw "New properties are: \{ip=(?<new_ip>[^\ ,]*)[\ ,]+dns=(?<new_dns>[^\ ,]*)[\ ,]+hostname=(?<new_hostname>[^\ ,]*)[\ ,]+deploymentClientName=(?<new_deploymentclientname>[^\ ,]*)[\ ,]+.*instanceId=(?<new_instanceid>[^\ ,]*)[\ ,]+instanceName=(?<new_instancename>[^\ ,\}]*)"
    | stats latest(_time) AS ltm, count BY new_hostname, old_hostname, new_dns, old_dns, new_instancename, old_instancename, new_instanceid, old_instanceid, new_ip, old_ip
    | convert timeformat="%y-%m-%d" ctime(ltm) AS LastDay
    | fields - ltm