yannK's Topics



We set up splunkd to autostart using systemd, following https://docs.splunk.com/Documentation/Splunk/latest/Admin/RunSplunkassystemdservice, but when the Linux server reboots, Splunkd does not start and we have to start it manually.
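A common cause is that the unit file exists but was never enabled, so systemd does not start it at boot. As a sketch (assuming the default unit name Splunkd, as created by `splunk enable boot-start -systemd-managed 1`; yours may differ), the state can be checked and fixed like this:

```shell
# Check whether systemd knows the unit and whether it is enabled at boot
systemctl status Splunkd
systemctl is-enabled Splunkd

# If it reports "disabled", enable it so it starts on reboot
sudo systemctl enable Splunkd
```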
ITSI menus send users to a "suite_redirect" page that fails to load and shows "oops" for non-admin users. This usually happens after an ITSI upgrade (observed on 4.9 and later) on a search head cluster.
Since upgrading to ITSI 4.9, the app reverted to the free version "IT Essentials Work" (ITE-W) and most premium features are gone. This is because the ITSI license is now mandatory, or because the license master was not properly upgraded.
I have SCK set up and collect my Kubernetes metrics. Out of the box we have access to the node memory limit kube.node.memory.allocatable (in MB) and to the memory usage kube.node.memory.working_set_bytes (in bytes), but we want to do some calculations to get the memory usage percentage per node.
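As a sketch of the calculation (the index name k8s_metrics and the 5-minute span are assumptions; the metric names and units come from the question, with the allocatable value in MB converted to bytes), an mstats search along these lines computes the per-node percentage:

```spl
| mstats avg(kube.node.memory.working_set_bytes) AS used_bytes
         avg(kube.node.memory.allocatable) AS alloc_mb
         WHERE index=k8s_metrics span=5m BY node
| eval pct_used = round(100 * used_bytes / (alloc_mb * 1024 * 1024), 2)
| table _time node pct_used
```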
I want to install the new IT Essentials Work app: https://splunkbase.splunk.com/app/5403/ But when installing from the UI, the .spl upload returns an error. This is because the app is actually a package containing several apps, which have to be installed manually on the file system, like ITSI.
I have SAI (Splunk App for Infrastructure) automatically detecting new entities. But because I use VMs and containers, it detects many entities with a short lifetime; they are usually deleted after a day or two. In the UI they are tagged state=inactive, because there are no new events. I would like to remove them from SAI a few days after they go inactive, to clean up. Can this be done, and can it be automated?
I have a correlation search creating notable events. In index=itsi_tracked_alerts I see one event for a given event_id, but in Episode Review I see that event being a member of several episodes (in index=itsi_grouped_alerts, comparing event_id and itsi_group_id). This happens randomly. The ITSI health check dashboard also shows me the multiple grouping. What can cause that?
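To quantify the duplication, a search along these lines (a sketch, using only the index and field names mentioned in the question) lists every notable event that landed in more than one episode:

```spl
index=itsi_grouped_alerts
| stats dc(itsi_group_id) AS episode_count values(itsi_group_id) AS episodes BY event_id
| where episode_count > 1
```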
I saw several questions about the user "nobody" and would like a clear explanation of its meaning and implications.
- In the UI, in the search manager, I sometimes see saved searches with the owner "nobody".
- On disk, in $SPLUNK_HOME/etc/users, I do not see a "nobody" profile folder.
- In the apps, under local.meta or default.meta, for those searches shared from the UI, I do not see an ownership entry like "owner = nobody".
- If a user left and was deleted, should I change the ownership of their objects to "nobody"?
- I tried to create a user "nobody" in my local Splunk users, but the manager refused, as it is a reserved name.
- I am using LDAP/SAML, and I saw authentication errors in splunkd.log about the user "nobody" not being found.
Here are some of the answers I saw, but they are too specific:
https://answers.splunk.com/answers/200590/what-are-splunk-system-user-and-nobody.html
https://answers.splunk.com/answers/678324/issue-with-usernobody-with-ldap-authentication.html
https://answers.splunk.com/answers/425941/what-does-nobody-under-owner-column-signify-in-spl.html
I upgraded to ITSI 3.0.2 and started to see a warning about duplicate entities. It seems those duplicates were there before; the only new thing is the daily warning. Please find below the methods used to troubleshoot them.

What are those duplicates?
They are entities that were created separately but happen to have overlapping aliases. The consequence is that when ITSI filters the entities, more than one entity may match the filter. So you can end up with a miscalculated service average score (as extra entities are counted), or with a search picking the first duplicate found and ignoring the others. Duplicate entity aliases can prevent KPIs from calculating values properly, so good entity hygiene is critical to keep your services working.

How can I detect those duplicates?
- The UI warning, since ITSI 3.0.2
- Scan your entities manually
- Run a manual search:
| inputlookup itsi_entities | eval original='identifier.values' | mvexpand original | eval key=_key | stats count values(identifier.values) AS entity_aliases values(title) AS entity_title values(key) AS entity_key values(services._key) AS service_keys by original | eval error=if(count>1,"dupe","") | where count>1

How did I end up with duplicates?
Usually you get duplicate entities when the same entity was imported several times by different methods with different fields:
- An entity manually imported from a search or a CSV, with particular fields for the entity title and the entity aliases, but with different values for the fields, or a case difference.
- An entity automatically imported by a specific module (like the virtual module and the OS module), but each time with a slightly different name as title or alias, or due to a race condition (two modules detected the same entity at the same time and were not able to identify it). This bug, ITSI-830, is fixed in ITSI 4.0.0: http://docs.splunk.com/Documentation/ITSI/4.0.0/ReleaseNotes/Fixedissues
- A mix of both.
The differences may be that the title and alias are different (short host name versus FQDN, lowercase name versus all-caps name, ...). The goal of the aliases was to handle those situations, but it may not work if the pre-existing entity did not have all the proper alias fields set up beforehand.

Solutions to clean up and avoid duplicates
- Always do a backup first (ITSI > Configurations > Backup/Restore).
- If the issue was caused by auto-import, disable the auto-imports (ITSI app > Settings > Data inputs > IT Service Intelligence Asynchronous CSV Loader, then disable the appropriate inputs). You can retry the auto-import later, after you have completed the cleanup of the entities and normalized the fields and aliases.
- Merge the duplicates, move the fields that differ into one entity, then delete the extra one.
- Ultimately, you want to test your entity import searches to ensure there will be no conflicts. If an import is done well, the import script is able to identify that an entity already exists with a similar alias field and avoid a double import.

Examples of duplicate situations

Example 1:
title = mysql-01
alias: host = mysql-01, datacenter = moonracker
info: itsi_role = operating_system_host, vendor_product = unix.version
and
title = nagios-01
alias: host = nagios-01, datacenter = moonracker
info: itsi_role = operating_system_host, vendor_product = unix.version
The field "datacenter" was used as an alias, while it should have been used as an info field. As a consequence, the alias "moonracker" may cause confusion between entities if used as a filter for a service.
Solution: move the field datacenter to an info field.

Example 2:
title = appserver-01
alias: host = appserver-01
info: itsi_role = operating_system_host, vendor_product = unix.version, EOL = 2020-02-12
and
title = appserver-01.buttercup.com
alias: host = appserver-01.buttercup.com
info: itsi_role = operating_system_host, vendor_product = unix.version
The title and alias use different versions of the host name: one is the short name, the other the FQDN.
Solution: pick one entity to merge on, add the titles and aliases of the others to it, add all the info fields to it, then delete the extra copy:
title = appserver-01.buttercup.com
alias: host = appserver-01.buttercup.com, appserver-01
info: itsi_role = operating_system_host, vendor_product = unix.version, EOL = 2020-02-12

Example 3:
title = webserver-02
alias: host = webserver-02
info: itsi_role = operating_system_host, vendor_product = unix.version
and
title = WEBSERVER-02
alias: host = WEBSERVER-02
info: itsi_role = virtual_host, vendor_product = unix.version
The title and alias differ only in case. This will not be detected by the script but could be considered a duplicate situation. We can also tell from the itsi_role fields that they were detected by different modules (OS and Virtual).
Solution: pick one entity to merge on, add the titles and aliases of the others to it, add all the info fields to it, then delete the extra copy:
title = WEBSERVER-02
alias: host = WEBSERVER-02, webserver-02
info: itsi_role = virtual_host,operating_system_host, vendor_product = unix.version

Example 4:
title = webserver02
alias: host = web02, id = web02
In a single entity, two aliases have the same value; this causes the ITSI migration check to fail.
Solution: remove one of the aliases, or make it an info field:
title = webserver02
alias: host = web02
info: id = web02
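For situations like Example 3, where the only difference is case, the detection search above can be adapted as a sketch by lowercasing the alias values before grouping, so that WEBSERVER-02 and webserver-02 collide on the same key:

```spl
| inputlookup itsi_entities
| eval original=lower('identifier.values')
| mvexpand original
| eval key=_key
| stats count values(identifier.values) AS entity_aliases values(title) AS entity_title
        values(key) AS entity_key by original
| where count > 1
```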
I get errors on my instance each time I start it: the app has invalid settings (version 1.4.3). It looks like these are just missing spec files. For the app's authors: can you fix that in the next version? Thanks.
I did an upgrade of my ITSI to 3.0, and in the process I saw some errors in itsi_migration.log:
2017-10-23 09:53:36,941 INFO [itsi.migration] [base_migration_interface] [_get_object_file_list] [23596] obtain the local storage target file list: ['D:\\Splunk\\var\\itsi\\migration_helper\\kpi_base_search___0.json']
2017-10-23 09:53:41,783 ERROR [itsi.migration] [migration] [migration_bulk_save_to_kvstore] [23596] [HTTP 400] Bad Request; [{'type': 'ERROR', 'text': 'Parameter "name" must be 100 characters or less.', 'code': None}]
Now the service panel does not load, and I had to roll back to ITSI 2.6.*.
I want to use volumes in indexes.conf to limit the space used by my indexes. On each index I see four paths: homePath / coldPath / thawedPath / tstatsHomePath; the last one seems to be used for accelerated data models or report acceleration. How does this work? I noticed that there are several possible paths, and some of them (the summaries) are already using volumes that happen to point to the default $SPLUNK_DB path. Does a volume take into account other folders in the same location that are not managed by Splunk? Does it take into account indexes in the same location that use plain paths instead of volumes?
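As a minimal sketch of the layout (the volume name, paths, and sizes below are made-up examples), a volume caps the total size of everything placed under it, and the index paths reference it with the volume: prefix:

```conf
# indexes.conf: a volume with a total size cap shared by every index that uses it
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[my_index]
homePath   = volume:primary/my_index/db
coldPath   = volume:primary/my_index/colddb
# thawedPath does not support volumes; it must stay a plain path
thawedPath = $SPLUNK_DB/my_index/thaweddb
# tstatsHomePath holds data model acceleration summaries and can use a volume too
tstatsHomePath = volume:primary/my_index/datamodel_summary
```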
I want to run Splunk on Linux on a cluster as a non-root user. I found several ways to change the user (boot-start, the init.d/splunk service, splunk-launch.conf). What are the advantages of each method, and what is the behavior with restarts, service restarts, and rolling restarts?
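For reference, a sketch of how each method designates the user (assuming the service account is named splunk and $SPLUNK_HOME is /opt/splunk; run as root):

```shell
# init.d style: generates /etc/init.d/splunk and records the user to run as
/opt/splunk/bin/splunk enable boot-start -user splunk

# systemd style (Splunk 7.2.2 and later): generates a Splunkd unit instead
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk

# splunk-launch.conf alternative: set SPLUNK_OS_USER = splunk in
# /opt/splunk/etc/splunk-launch.conf so a start as root drops to that user
```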
I collect my VPC logs using the AWS add-on: sourcetype=aws:cloudwatchlogs:vpcflow index=myvpclogs. I can see the data in my index, but the dashboards in the AWS app for the VPC logs (vpc_flow_logs_traffic and vpc_flow_logs_security) do not populate. It looks like the search is looking for data in index=aws_vpc_flow_logs.
I noticed that the app SplunkforPaloAltoNetworks runs 5 scheduled searches and 1 accelerated data model. I understand that these need to run on the search head, but I also installed the app on the indexers, and they are now running the searches twice, causing unnecessary load and accelerating an extra copy of the data model that is never used. Can you confirm whether the app should be deployed on the indexers, and if so, whether the scheduled searches and data models have to be disabled in the app copy on the indexers?
I noticed that the app cis-controls-app-for-splunk runs 600+ scheduled searches. I understand that these need to run on the search head, but I also installed the app on the indexers, and they are now running the searches twice, causing unnecessary load. Can you confirm whether the app should be deployed on the indexers, and if so, whether the scheduled searches have to be disabled on the indexers?
I have a CIFS mount from Azure on a server, and a Splunk forwarder monitoring the mounted folder. I discovered that Splunk detects the files when starting, but not later when a file is modified.
While using the Splunk Add-on for Amazon Web Services and the Splunk App for AWS, I got into a situation where all my inputs stopped after upgrading the add-on from 4.0.0 to 4.1.1. The symptoms were:
- no data collection
- many accounts missing or "not found" in the internal AWS logs
- a warning icon on the accounts page of the add-on, with "Not set, please edit this account" in the region column
Also, when editing an account from the app (not the add-on), the account was removed when we saved it.
The workaround was to:
- edit the accounts one by one from the add-on
- add the region
- save, and check that the warning is gone
Recommendation: use the add-on's inputs manager rather than the app's.
How to build a form that drills down to events around the selected event's timestamp:
1 - show a list of results
2 - click one of them to select its timestamp
3 - populate a panel that shows the events from another search, around the time of the selected event
Example: show me events 10 minutes before and after the selected one.
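A minimal Simple XML sketch of this pattern (the index and sourcetype names are placeholders; it assumes _time is the first table column, so $click.value$ holds the clicked epoch timestamp, and 600 seconds gives the ±10-minute window):

```xml
<form>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main sourcetype=my_events | table _time host message</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
        <drilldown>
          <!-- $click.value$ is the epoch _time of the clicked row -->
          <eval token="tok_earliest">$click.value$ - 600</eval>
          <eval token="tok_latest">$click.value$ + 600</eval>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <!-- This panel only appears once a row has been clicked -->
    <panel depends="$tok_earliest$">
      <event>
        <search>
          <query>index=main sourcetype=other_events</query>
          <earliest>$tok_earliest$</earliest>
          <latest>$tok_latest$</latest>
        </search>
      </event>
    </panel>
  </row>
</form>
```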
I noticed that after upgrading to Splunk 6.4, my user with a custom role cannot see the "Export" button at the bottom of dashboards or at the top of search results, while my admin user can see it.