All Posts

****update**** Did a new install on Windows and everything is now working with the same test files. Going to blow away the Ubuntu server, reimage, and try the install again, so I am thinking it has something to do with how the install was done.
_______________________________________________________________________________________

I am working with eventgen. I have my eventgen.conf file and some sample files, and I am working with the token and regex commands in eventgen.conf. I can get all commands to work except mvfile. I tried several ways to create the sample file, but eventgen will not read it and kicks errors such as "file doesn't exist" or "0 columns". I created a file with a single line of items separated by commas and still no go. If I create a file with a single item in it, whether a word or a number, eventgen will find it and add it to the search results. If I change it to mvfile and use :1, it will not read the same file and will kick an error. Can anyone please give me some guidance on why mvfile doesn't work? Any help would be greatly appreciated. Search will pull results from the random, file, and timestamp commands, just not mvfile.

Snip from eventgen.conf:

token.4.token = nodeIP=(\w+)
token.4.replacementType = mvfile
token.4.replacement = $SPLUNK_HOME/etc/apps/SA-Eventgen/samples/nodename.sample:2

Snip from nodename.sample:

host01,10.11.0.1
host02,10.12.0.2
host03,10.13.0.3

Infrastructure: Ubuntu Server 24.04, Splunk 9.4.3, eventgen 8.2.0

I have tried to create the file from scratch with Notepad++, Notepad, Excel, and directly on the Linux server in the samples folder. I have validated the file as a CSV with the "goteleport" and "csvlint" sites.
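For reference, the full pattern I am trying to follow, based on my reading of the eventgen docs (the nodeName token and token numbers here are illustrative): mvfile picks a random line from a comma-delimited file, and every mvfile token in the same event that references the same file should be replaced from the same line, so paired columns such as hostname and IP stay consistent. As far as I can tell, columns are 1-indexed and the file should have no header row.

# illustrative paired tokens referencing the same sample file
token.3.token = nodeName=(\w+)
token.3.replacementType = mvfile
token.3.replacement = $SPLUNK_HOME/etc/apps/SA-Eventgen/samples/nodename.sample:1

token.4.token = nodeIP=(\S+)
token.4.replacementType = mvfile
token.4.replacement = $SPLUNK_HOME/etc/apps/SA-Eventgen/samples/nodename.sample:2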
I was trying to make this work too, but unfortunately @chrisboy68 I'm also at a bit of a dead end. Prefixing the field via eval breaks the chart for me, as it's no longer a numeric value, and there is no option to add a prefix/suffix in the visualisation. I scoured the non-UI options in the viz docs (e.g. https://splunkui.splunk.com/Packages/visualizations/Column) but couldn't find any way to do this either, sorry!
If you are ingesting with a UF, then props and transforms should work just as they do on-prem. You just have to install them on the first full Splunk Enterprise node the data passes through, as sketched below. What is "the add-on", and is it running on an HF or a UF? If you have a lot of logs to filter, you probably want to use intermediate heavy forwarders (IHFs) between the UFs and your indexer cluster.
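As a minimal index-time filtering sketch (the sourcetype and regex are placeholders for your own), events matching REGEX are routed to the null queue on the first HF or indexer that parses them, so they never reach disk or the license meter:

# props.conf (placeholder sourcetype)
[my:cloud:sourcetype]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf
[drop_noise]
REGEX = (?i)heartbeat|informational
DEST_KEY = queue
FORMAT = nullQueue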
Agree with @PickleRick that you need to clearly demonstrate the raw data, because I don't think your raw log looks like what you show. Is it more like the following?

name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3

In other words, it is multiline pipe (|) delimited text with a header line. (Like the default table list from many SQL DBMSs.) The format shown in your original description cannot be reliably processed.

If my speculation about your raw data is correct, you first change the delimiter to comma, then use multikv to extract from the table, like this:

| rex mode=sed "s/\|/,/g"
| multikv forceheader=1
| table name fname desc group cat exp set in

Here is an emulation for you to play with and compare with real data:

| makeresults
| fields - _time
| eval _raw = "name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3"
``` data emulation above ```

Output from this emulation is:

name  fname  desc            group                 cat  exp  set  in
abc   abc                    Administrators;Users  S         1    1
bbb   bbb    Internal                              N         2    2
ccc   ccc    MFT Service ID  Administrators;Users  S         3    3
Hi @PickleRick, agreed. However, when I start Splunk after accepting the license agreement, I run into the following screenshot, which takes care of the seamless migration. I believe what I'm doing must be a documented procedure and nothing unusual, and it also creates a migration log with the details of what was done during the process... please let me know your thoughts! Thanks for your help & happy 4th! Download the migration log from here: https://limewire.com/d/Jd4GD#NEdMoeWwVg
Hi @dav2, your user needs to have either the "power" or "admin" role in order to update that KV Store collection. Please check whether the user has one of these roles, such as "power", and see if this resolves the issue.
Following my last post, I think this should hopefully work for you:

{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_b4QqXqtO"
    },
    "options": {
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByName(\"file\") | matchValue(tableRowBackgroundColor)"
        }
    },
    "context": {
        "tableRowBackgroundColor": [
            { "match": "ce", "value": "#4E79A7" },
            { "match": "edit", "value": "#F28E2B" },
            { "match": "service_overview", "value": "#E15759" },
            { "match": "e2e_ritm", "value": "#76B7B2" },
            { "match": "e2e_task", "value": "#59A14F" },
            { "match": "monitor", "value": "#EDC948" },
            { "match": "sla__time_to_first_response", "value": "#B07AA1" },
            { "match": "sla__time_to_resolution", "value": "#FF9DA7" },
            { "match": "*", "value": "#FFFFFF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Hi @Federico92, here is an example which should hopefully help:

{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_aOEeGNWG"
    },
    "options": {
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByName(\"host\") | matchValue(tableRowBackgroundColor)"
        }
    },
    "context": {
        "tableRowBackgroundColor": [
            { "match": "macdev", "value": "#FF0000" },
            { "match": "cultivar", "value": "#00FF00" },
            { "match": "*", "value": "#FFFFFF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Table is just a method of visualizing data. You need to parse your data into fields. So the question is what the real data looks like (please copy-paste the raw event into a code block or a preformatted paragraph) and what it means. For now you have a lot of pipe-delimited "fields" but no way of knowing which of them are the "header", which are "data", and how many "data" rows there are.
I have a log event that looks like this...

name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3

The log event's text is delimited by 6 spaces... What Splunk query do I use to create a Splunk table like this?

name  fname  desc            group                 cat  exp  set  in
abc   abc                    Administrators;Users  S         1    1
bbb   bbb    Internal                              N         2    2
ccc   ccc    MFT Service ID  Administrators;Users  S         3    3
After recently upgrading the Splunk_TA_nix to version 9.2.0, I'm seeing the same issue.  Has anyone fixed this issue?
Hi all, I want to create a table in which row colours change based on row value. Source code attached:

{
    "type": "splunk.table",
    "options": {
        "fontWeight": "bold",
        "headerVisibility": "none",
        "rowColors": {
            "mode": "categorical",
            "categoricalColors": {
                "ce": "#4E79A7",
                "edit": "#F28E2B",
                "service_overview": "#E15759",
                "e2e_ritm": "#76B7B2",
                "e2e_task": "#59A14F",
                "monitor": "#EDC948",
                "sla__time_to_first_response": "#B07AA1",
                "sla__time_to_resolution": "#FF9DA7"
            },
            "field": "file"
        },
        "columnFormat": {
            "placeholder": {
                "data": "> table | seriesByName(\"placeholder\") | formatByType(placeholderColumnFormatEditorConfig)"
            },
            "file": {
                "data": "> table | seriesByName(\"file\") | formatByType(fileColumnFormatEditorConfig)"
            }
        }
    },
    "dataSources": {
        "primary": "ds_b4QqXqtO"
    },
    "title": "Legend",
    "context": {
        "placeholderColumnFormatEditorConfig": {
            "string": {
                "unitPosition": "after"
            }
        },
        "fileColumnFormatEditorConfig": {
            "string": {
                "unitPosition": "after"
            }
        }
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}

The code seems to be correct, but it doesn't work. I want to know what is wrong, and especially whether the function I want is supported. Thanks in advance.
Hi @beano501, good for you, see you next time! Ciao and happy splunking. Giuseppe
P.S.: Karma points are appreciated by all the contributors.
Thanks for the responses. I had only really considered using summary indexes as part of the usual summary index commands, sitimechart etc. What I have got working is:

index=xxxxx sourcetype="mscs:kql"
| eval _raw = SyslogMessage
| fields _raw
| collect index=main sourcetype=fortigate_event run_in_preview=true

which achieves what I am after. I appreciate this approach would impact licensing, but it will be low volume. Thanks again.
Hi everyone,

We are encountering a problem with the Automated Introspection feature for Data Inventory in Splunk Security Essentials. Although the introspection process seems to run just fine, it fails to save the data. No error messages are displayed in the UI; however, the introspection process does not map any data as expected. We analyzed the situation using the development console in the browser, as Splunk does not seem to provide error messages at this point in the UI. Here are the specifics of the request and the response we received:

Request details:
Request URL: https://our-splunk-instance.com/servicesNS/nobody/Splunk_Security_Essentials/storage/collections/data/data_inventory_products/batch_save
Request Method: POST
Status Code: 403 Forbidden

Response message:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">User '[username]' with roles { [role1], [role2], ... } cannot write to the collection: /nobody/Splunk_Security_Essentials/collections/data_inventory_products { read : [ * ], write : [ admin, power ] }, export: global, owner: nobody, removable: no, modtime: [timestamp]</msg>
  </messages>
</response>

The error message suggests that the user [username] does not have the necessary write permissions for the specified collection. The roles assigned to this user include [role1], [role2], ..., which appear to lack the required write access.

Steps we have taken so far:
- We have reviewed the permissions settings and suspect that the issue is related to insufficient write permissions.
- We consulted the documentation on editing permissions to provide write access: Edit permissions to provide write access to Splunk Security Essentials - Splunk Documentation.

Can anyone provide guidance on any troubleshooting steps that might resolve this issue? We are particularly interested in understanding how to grant the necessary write access to the user or roles involved.

Thank you in advance for your support!

Best regards
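For what it's worth, the change we are considering (the role name below is a placeholder) is widening the collection ACL in the app's metadata/local.meta on the search head, matching the ACL shown in the error message, and then restarting Splunk so it takes effect:

# $SPLUNK_HOME/etc/apps/Splunk_Security_Essentials/metadata/local.meta
[collections/data_inventory_products]
access = read : [ * ], write : [ admin, power, your_custom_role ]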
Which, if your indexers are using a different partition for their storage, could be anywhere. I found that I was missing the link too, but note that I've put the link in at the kvstore level rather than the mongo level:

ln -s /splunkdata/kvstore /opt/splunk/var/lib/splunk/kvstore

where /splunkdata/ is my mounted data drive where all my indexes go.
Try saving the existing drill-down search again (even without making real changes), or create it from scratch. After the "changes", the tokens $info_min_time$ and $info_max_time$ start working correctly.
As @gcusello says, with summary indexing you have to bear in mind the licensing costs. It partly depends on the sourcetype (stash vs non-stash), but it also depends on whether you are on an ingest-based or workload-based licensing model - for more details see here.
Hi @beano501, as @ITWhisperer also hinted, the best solution is scheduling a search that extracts only the fields you need and stores the results in a summary index using the collect command. Just one caution: you shouldn't use a sourcetype other than "stash", because otherwise you pay the license twice. Ciao. Giuseppe
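For example, a minimal sketch (the summary index name is a placeholder): leaving the sourcetype option off collect keeps the default "stash" sourcetype, which is not counted against the ingest license:

index=xxxxx sourcetype="mscs:kql"
| eval _raw = SyslogMessage
| fields _raw
| collect index=my_summary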
Hi @isoutamo, we are using on-premise Splunk Enterprise version 9.4.2 in a distributed environment with a multi-site indexer cluster and a search head cluster. Right now we are ingesting OS logs, security logs, and application logs from Windows and Linux servers using universal forwarders. Some of the company's applications are hosted in AWS and Microsoft Azure, and we wanted to ingest the security logs of those applications to monitor them for cybersecurity purposes. But when we connected to the cloud using the add-on, we were getting a lot of unwanted logs, which led to license over-utilization. When we tried filtering, due to the large amount of logs and the continuous filtering, our Splunk servers had high utilization, which slowed the whole Splunk service down. Hence, I want a method where we can filter out the unwanted logs, or select only the required logs, before they enter the Splunk servers. Even if the solution is not from Splunk but from AWS or Azure, it would be fine as long as we can send the logs to Splunk.