All Posts


Hi, you must provide a `model` parameter in your `| ai` command: https://docs.splunk.com/Documentation/MLApp/5.6.0/User/Aboutaicommand#Parameters_for_the_ai_command

Also, I assume that you have configured the required details in the Connection Management page: https://docs.splunk.com/Documentation/MLApp/5.6.0/User/Aboutaicommand#Connection_Management_page
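For illustration, a minimal sketch of the command shape — the `prompt` parameter name and the model value here are assumptions, so confirm the exact parameter names against the linked docs for your MLTK version:

``` sketch only: prompt= is assumed; model= must name a model configured on the Connection Management page ```
| ai prompt="Summarize recent authentication failures" model="your_configured_model"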
This may not be the best place to ask given my issue isn't technically Splunk related, but hopefully I can get some help from people smarter than me anyway.

(?i)(?P<scheme>(?:http|ftp|hxxp)s?(?:://|-3A__|%3A%2F%2F))?(?:%[\da-f][\da-f])?(?P<domain>(?:[\p{L}\d\-–]+(?:\.|\[\.\]))+[\p{L}]{2,})(@|%40)?(?:\b| |[[:punct:]]|$)

The above regex is a template I'm working from (lol, I'm not nearly good enough to write this). While it's not too hard to read and see how it works, in a nutshell, it matches the domain of a URL and nothing else. It does this by first looking for an optional leading 'https://' and storing it in the 'scheme' group, then parsing the domain that follows. For example, the URL 'https://community.splunk.com/t5/forums/postpage/board-id/splunk-search' would match 'community.splunk.com'.

My issue is that the way it looks for the domain following the 'scheme' group requires a TLD (.com, .net, etc.). Unfortunately, internal services used by my company don't use a TLD, and this causes the regex to miss them.

I want to modify the expression above to detect URLs like 'https://mysite/resources/rules/123456', where the domain would be 'mysite'. I've attempted to do so, but with my limited understanding of how regex really works, my attempts led to too many matches, as shown below:

(?i)(?P<scheme>(?:http|ftp|hxxp)s?(?::\/\/|-3A__|%3A%2F%2F))?(?:%[\da-f][\da-f])?(?P<domain>((?:[\p{L}\d\-–]+(?:\.|\[\.\]))+)?[\p{L}]{2,})(@|%40)?(?:\b| |[[:punct:]]|$)

I tried to throw an extra non-capturing group into the named 'domain' group and make the entire first half of the 'domain' group optional, but it leads to matches beyond the domain. Thank you to whoever may be able to assist. This doesn't feel like it should be such a difficult thing, but it's been vexing me for hours.
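One way to see why the attempted change over-matches: once the dotted-label prefix is optional, the remaining [\p{L}]{2,} happily matches any two-plus-letter word on its own. A sketch of one alternative (the field name url is an assumption, the percent-encoding and trailing-boundary pieces are trimmed for brevity, and note the trade-off: the scheme becomes mandatory, so bare domains with no leading scheme would no longer match):

| makeresults
| eval url="https://mysite/resources/rules/123456"
``` sketch: scheme is required so bare words don't match; one or more dot- or [.]-separated labels are allowed ```
| rex field=url "(?i)(?<scheme>(?:http|ftp|hxxp)s?(?:://|-3A__|%3A%2F%2F))(?<domain>[\p{L}\d\-]+(?:(?:\.|\[\.\])[\p{L}\d\-]+)*)"
| table url scheme domain

Against 'https://community.splunk.com/t5/forums/postpage/board-id/splunk-search' the same pattern still extracts 'community.splunk.com'.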
I have tried all of the above suggestions and am still getting the following error when trying to install MLTK:

Error during app install: failed to extract app from C:\Program Files\Splunk\var\run\c6ae5d0a07047977.tar.gz to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\878c329ad1cecad1: Operation did not complete successfully because the file contains a virus or potentially unwanted software.

I am running the Enterprise trial version on a Win11 box. In fact, I was not able to find the file C:\Program Files\Splunk\var\run\c6ae5d0a07047977.tar.gz, nor any file with a .tar or .tar.gz extension among the extracted files (I downloaded the PSC zip from Splunk). I am in the middle of a Coursera course and am stuck because I can't install PSC or MLTK. Help please.
****update**** Did a new install on Windows and everything is now working with the same test files. Going to blow away the Ubuntu server, reimage, and try the install again. So I am thinking it has something to do with how the install was done.
_______________________________________________________________________________________

I am working with eventgen. I have my eventgen.conf file and some sample files, and I am working with the token and regex commands in eventgen.conf. I can get all commands to work except mvfile. I tried several ways to create the sample file, but eventgen will not read the file and kicks errors such as "file doesn't exist" or "0 columns". I created a file with a single line of items separated by commas and still no go. If I create a file with a single item in it, whether it be a word or a number, eventgen will find it and add it to the search results. If I change it to mvfile and use :1, it will not read the same file and will kick an error. Can anyone please give me some guidance on why mvfile doesn't work? Any help would be greatly appreciated. Search will pull results from the (random, file, timestamp) commands, just not mvfile.

Snip from eventgen.conf:

token.4.token = nodeIP=(\w+)
token.4.replacementType = mvfile
token.4.replacement = $SPLUNK_HOME/etc/apps/SA-Eventgen/samples/nodename.sample:2

Snip from nodename.sample:

host01,10.11.0.1
host02,10.12.0.2
host03,10.13.0.3

Infrastructure: Ubuntu Server 24.04, Splunk 9.4.3, eventgen 8.2.0

I have tried creating the file from scratch with Notepad++, Notepad, Excel, and directly on the Linux server in the samples folder. I have validated the file as a CSV with the "goteleport" and "csvlint" sites.
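For reference, a minimal shape of an mvfile token (a sketch — the stanza name is a placeholder for your sample file, and the suffix after the colon is the 1-based column number):

[mysample.log]
# mvfile picks a row from the sample file at random and substitutes the requested column;
# tokens that reference the same mvfile in one event draw from the same row
token.0.token = nodeIP=(\w+)
token.0.replacementType = mvfile
token.0.replacement = $SPLUNK_HOME/etc/apps/SA-Eventgen/samples/nodename.sample:2

Since some of your attempts created the file on Windows, it may also be worth ruling out CRLF line endings or a BOM in the sample file, which could plausibly produce a "0 columns" style failure on a Linux host.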
I was trying to make this work too, but unfortunately @chrisboy68 I'm also at a bit of a dead end. Prefixing by eval-ing the field breaks the chart for me, as it's no longer a numeric value, and there is no option to add a prefix/suffix in the visualisation; I scoured the non-UI based options from the viz docs (e.g. https://splunkui.splunk.com/Packages/visualizations/Column) but also couldn't find any way to do this, sorry!
If you are ingesting with a UF then props and transforms should work as they do on-prem. You just need to install them on the first full Splunk Enterprise node. What is this "the add-on"? And is it running on a HF or a UF? If you have a lot of logs to filter then you probably want to use IHFs between the UFs and your indexer cluster.
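For reference, the classic filtering shape in props/transforms (a sketch — the sourcetype, transform name, and regex are placeholders) routes unwanted events to the null queue on the first full instance that parses the data:

props.conf:

[your:sourcetype]
TRANSFORMS-dropnoise = drop_noise

transforms.conf:

# events whose _raw matches REGEX are routed to nullQueue, i.e. discarded before indexing
[drop_noise]
REGEX = pattern_to_discard
DEST_KEY = queue
FORMAT = nullQueue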
Agree with @PickleRick that you need to clearly demonstrate raw data, because I don't think your raw log looks like what you show. Is it more like the following?

name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3

In other words, it is multiline pipe (|) delimited text with a header line. (Like the default table listing from many SQL DBMSs.) The format shown in your original description cannot be reliably processed.

If my speculation about your raw data is correct, you first change the delimiter to a comma, then use multikv to extract from the table, like this:

| rex mode=sed "s/\|/,/g"
| multikv forceheader=1
| table name fname desc group cat exp set in

Here is an emulation for you to play with and compare with real data:

| makeresults
| fields - _time
| eval _raw = "name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3"
``` data emulation above ```

Output from this emulation is:

name  fname  desc            group                 cat  exp  set  in
abc   abc                    Administrators;Users  S         1    1
bbb   bbb    Internal                              N         2    2
ccc   ccc    MFT Service ID  Administrators;Users  S         3    3
Hi @PickleRick 

Agreed; however, when I start Splunk after accepting the license agreement, I run into the following screenshot, which takes care of the seamless migration. I believe what I'm doing must be a documented procedure and nothing unusual, and it also creates a migration log with the details of what was done during the process... please let me know your thoughts!

Thanks for your help & Happy 4th!!

Download the migration log from here: https://limewire.com/d/Jd4GD#NEdMoeWwVg
Hi @dav2 

Your user needs to have either the "power" or "admin" role in order to be able to update that KV Store collection. Please check if the user has one of these roles, such as "power", and see if this resolves the issue.
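To confirm which roles the current user actually holds, a quick sketch against the REST endpoint for the current session:

| rest /services/authentication/current-context splunk_server=local
``` shows the logged-in user and the roles assigned to them ```
| table username roles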
Following my last post, I think this should hopefully work for you:

{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_b4QqXqtO"
    },
    "options": {
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByName(\"file\") | matchValue(tableRowBackgroundColor)"
        }
    },
    "context": {
        "tableRowBackgroundColor": [
            { "match": "ce", "value": "#4E79A7" },
            { "match": "edit", "value": "#F28E2B" },
            { "match": "service_overview", "value": "#E15759" },
            { "match": "e2e_ritm", "value": "#76B7B2" },
            { "match": "e2e_task", "value": "#59A14F" },
            { "match": "monitor", "value": "#EDC948" },
            { "match": "sla__time_to_first_response", "value": "#B07AA1" },
            { "match": "sla__time_to_resolution", "value": "#FF9DA7" },
            { "match": "*", "value": "#FFFFFF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Hi @Federico92 

Here is an example which should hopefully help:

{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_aOEeGNWG"
    },
    "options": {
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByName(\"host\") | matchValue(tableRowBackgroundColor)"
        }
    },
    "context": {
        "tableRowBackgroundColor": [
            { "match": "macdev", "value": "#FF0000" },
            { "match": "cultivar", "value": "#00FF00" },
            { "match": "*", "value": "#FFFFFF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Table is just a method of visualizing data. You need to parse your data into fields. So the question is what the real data looks like (please copy-paste the raw event into a code block or a preformatted paragraph) and what it means. For now you have a lot of pipe-delimited "fields" but no way of knowing which of them are the "header", which are "data", and how many "data" rows there are.
I have log events that look like this...

"name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3"

The log event's text is delimited by 6 spaces... What Splunk query do I use to create a Splunk table like this?

name  fname  desc            group                 cat  exp  set  in
abc   abc                    Administrators;Users  S         1    1
bbb   bbb    Internal                              N         2    2
ccc   ccc    MFT Service ID  Administrators;Users  S         3    3
After recently upgrading Splunk_TA_nix to version 9.2.0, I'm seeing the same issue. Has anyone found a fix?
Hi all, I want to create a table in which row colours change based on row value. Here is the source code:

{
    "type": "splunk.table",
    "options": {
        "fontWeight": "bold",
        "headerVisibility": "none",
        "rowColors": {
            "mode": "categorical",
            "categoricalColors": {
                "ce": "#4E79A7",
                "edit": "#F28E2B",
                "service_overview": "#E15759",
                "e2e_ritm": "#76B7B2",
                "e2e_task": "#59A14F",
                "monitor": "#EDC948",
                "sla__time_to_first_response": "#B07AA1",
                "sla__time_to_resolution": "#FF9DA7"
            },
            "field": "file"
        },
        "columnFormat": {
            "placeholder": {
                "data": "> table | seriesByName(\"placeholder\") | formatByType(placeholderColumnFormatEditorConfig)"
            },
            "file": {
                "data": "> table | seriesByName(\"file\") | formatByType(fileColumnFormatEditorConfig)"
            }
        }
    },
    "dataSources": {
        "primary": "ds_b4QqXqtO"
    },
    "title": "Legend",
    "context": {
        "placeholderColumnFormatEditorConfig": {
            "string": {
                "unitPosition": "after"
            }
        },
        "fileColumnFormatEditorConfig": {
            "string": {
                "unitPosition": "after"
            }
        }
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}

The code seems to be correct, but it doesn't work. I want to know what is wrong, and especially whether the behaviour I want is supported. Thanks in advance.
Hi @beano501 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thanks for the responses. I had only really considered using summary indexes as part of the usual summary index commands (sitimechart etc.). What I have got working is:

index=xxxxx sourcetype="mscs:kql"
| eval _raw = SyslogMessage
| fields _raw
| collect index=main sourcetype=fortigate_event run_in_preview=true

which achieves what I am after. I appreciate this approach would impact licensing, but it will be low volume.

Thanks again
Hi everyone,

We are encountering a problem with the Automated Introspection feature for Data Inventory in Splunk Security Essentials. Although the introspection process seems to run just fine, it fails to save the data. On the UI, there are no error messages displayed; however, the introspection process does not map any data as expected. We analyzed the situation using the development console in the browser, as Splunk does not seem to provide error messages at this point in the UI. The following are the specifics of the request and the response we received:

Request details:
Request URL: https://our-splunk-instance.com/servicesNS/nobody/Splunk_Security_Essentials/storage/collections/data/data_inventory_products/batch_save
Request Method: POST
Status Code: 403 Forbidden

Response message:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">User '[username]' with roles { [role1], [role2], ... } cannot write to the collection: /nobody/Splunk_Security_Essentials/collections/data_inventory_products { read : [ * ], write : [ admin, power ] }, export: global, owner: nobody, removable: no, modtime: [timestamp]</msg>
  </messages>
</response>

The error message suggests that the user [username] does not have the necessary write permissions for the specified collection. The roles assigned to this user include [role1], [role2], ..., which appear to lack the required write access.

Steps we have taken so far:
- We have reviewed the permissions settings and suspect that the issue is related to insufficient write permissions.
- We consulted the documentation on editing permissions to provide write access: Edit permissions to provide write access to Splunk Security Essentials - Splunk Documentation.

Can anyone provide guidance on any troubleshooting steps that might resolve this issue? We are particularly interested in understanding how to grant the necessary write access to the user or roles involved.

Thank you in advance for your support!

Best regards
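As a quick way to inspect the collection's ACL from search (a sketch against the KV Store collections config REST endpoint; run it as an admin):

| rest /servicesNS/nobody/Splunk_Security_Essentials/storage/collections/config splunk_server=local
| search title="data_inventory_products"
| table title eai:acl.sharing eai:acl.perms.read eai:acl.perms.write

The error text already shows write is limited to { admin, power }, so granting (or inheriting) one of those roles, or widening the collection's write permissions as described in the linked documentation, should clear the 403.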
Which, if your indexers are using a different partition for their storage, could be anywhere. I found that I was missing the link too, but note that I've put the link in at the kvstore level rather than the mongo level:

ln -s /splunkdata/kvstore /opt/splunk/var/lib/splunk/kvstore

where /splunkdata/ is my mounted data drive where all my indexes go.
Try saving the existing drill-down search again (even without real changes), or create it from scratch. After the "changes", the tokens $info_min_time$ and $info_max_time$ start working correctly.
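For reference, these tokens are typically consumed as the drill-down search's time bounds, e.g. (the index name is a placeholder):

``` $info_min_time$ and $info_max_time$ carry the originating search's time range into the drill-down ```
index=my_index earliest=$info_min_time$ latest=$info_max_time$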