All Posts



Hi, did you find a fix besides reassigning all the savedsearches without an owner?
@Namdev Did you complete the following steps?

1. Copy the app to the $SPLUNK_HOME/etc/manager-apps directory on the cluster master node.
2. Push the app from the cluster master to the peer nodes by running the command: /opt/splunk/bin/splunk apply cluster-bundle. This updates the cluster configuration on the peer nodes.
3. Verify on the indexers that the app is present in the /opt/splunk/etc/peer-apps directory.

If the app is not visible, refer to the official documentation for detailed instructions on how to push the app from the cluster master: https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Manageappdeployment
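The steps above amount to a directory layout and a push command. A rough sketch, in which the app name is purely illustrative:

```
# On the cluster manager (app name is an example):
$SPLUNK_HOME/etc/manager-apps/my_parsing_app/local/props.conf
$SPLUNK_HOME/etc/manager-apps/my_parsing_app/local/transforms.conf

# Push the bundle to the peers:
$SPLUNK_HOME/bin/splunk apply cluster-bundle

# After a successful push, each indexer should show:
$SPLUNK_HOME/etc/peer-apps/my_parsing_app/
```
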
Yes, I tried using the app option, and I also checked with the _cluster option, where I placed the props.conf and transforms.conf files and distributed them among the peers.
@ww9rivers  The warning message "Pipeline data does not have indexKey" typically indicates that the data being sent to the indexer is missing the necessary index information.  Make sure that the inputs.conf file on your forwarder or heavy forwarder is configured with the correct index. I recommend creating and using a dedicated index instead of the main index, as main is the default index and it's better to keep your data organized.  
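As a sketch of that advice, a minimal inputs.conf monitor stanza that pins the index explicitly; the path, index, and sourcetype names here are examples, not taken from the original TA:

```
[monitor:///var/log/myapp/app.log]
index = my_app_logs
sourcetype = my_app_json
disabled = 0
```
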
@Namdev I suggest starting with a standalone test instance. Create your props.conf and transforms.conf files in either the /opt/splunk/etc/system/local or app/local directory, then restart the Splunk instance. After that, open the web interface of the same instance, navigate to the "Add Data" option, and upload your sample log file. Apply your custom sourcetype, "custom_logs," and verify if it's working as expected. If everything looks good, proceed to update the same configuration in the cluster using the cluster master.
So I removed this stanza from the default.meta file:

[savedsearches]
owner = admin

and it started working. How?
@Namdev  Did you deploy the props.conf and transforms.conf files through the cluster manager? You need to create an app on the cluster manager under /opt/splunk/etc/master-apps/ or /opt/splunk/etc/manager-apps/. Once the app is deployed, it should be propagated to the indexers, appearing under /opt/splunk/etc/peer-apps/ or /opt/splunk/etc/slave-apps/. Please verify if you have correctly created and deployed the app containing the props.conf and transforms.conf configurations. Update common peer configurations and apps - Splunk Documentation  
I am writing a simple TA to read a text file and turn it into a list of JSON events. I am getting a WARN message for each event from the TcpOutputProc process, such as the one below: 02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. I removed the rest of the message containing details. It seems that I am missing something simple. I would greatly appreciate some insights/pointers towards debugging this issue. The TA code is here in GitHub: https://github.com/ww9rivers/TA-json-modinput Many thanks in advance!
Hello Team,

Parsing issue: I have built a distributed Splunk lab using a trial license. The lab consists of three indexers, one cluster manager, one search head, one instance serving as the Monitoring Console (MC), Deployment Server (DS), and License Manager (LM), along with two Universal Forwarders. The forwarder is monitoring the /opt/log/routerlog directory, where I have placed two log files: cisco_ironport_web.log and cisco_ironport_mail.log. The logs are successfully forwarded to the indexers and then to the search head. However, log parsing is not happening as expected. I have applied the same props.conf and transforms.conf configuration on both the indexer cluster and the search head.

props.conf and transforms.conf file paths:

Indexer path: /opt/splunk/etc/peer-apps/_cluster/local
Search head path: /opt/splunk/etc/apps/search/local

transforms.conf:

[extract_fields]
REGEX = ^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?P<src_ip>\d+\.\d+\.\d+\.\d+)\s+(?P<email>\S+@\S+)\s+(?P<domain>\S+)\s+(?P<url>\S+)
FORMAT = timestamp::$1 src_ip::$2 email::$3 domain::$4 url::$5

props.conf:

[custom_logs]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TRANSFORMS-extract_fields = extract_fields
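Before deploying, the extraction regex itself can be sanity-checked outside Splunk. A quick Python sketch (Python's named-group syntax coincides with the PCRE syntax used in transforms.conf here); the sample log line is made up to match the layout the regex expects, not taken from the actual Ironport logs:

```python
import re

# Same pattern as in the transforms.conf stanza above.
PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<src_ip>\d+\.\d+\.\d+\.\d+)\s+"
    r"(?P<email>\S+@\S+)\s+"
    r"(?P<domain>\S+)\s+"
    r"(?P<url>\S+)"
)

# Hypothetical sample line in the expected layout.
sample = "2025-02-21 01:06:04 10.1.2.3 user@example.com example.com http://example.com/index.html"

m = PATTERN.match(sample)
if m:
    # Shows which value each named group captured.
    print(m.groupdict())
```

If the pattern fails here, it will also fail in Splunk, so this isolates regex problems from deployment problems.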
Hi, I am seeing the same error message. Has anyone been able to resolve this?
Hello, I have a fresh install of Splunk and the Meraki TA app. I have configured several inputs in the app; however, I am seeing a large number of these error messages under various inputs (for example, appliance_vpn_statuses, appliance_vpn_stats):

2025-02-24 03:12:56,971 WARNING pid=50094 tid=MainThread file=cisco_meraki_connect.py:col_eve:597 | Could not identify datetime field for input: cisco_meraki_appliance_vpn_statuses
I have 3 sources that I need to do this for, and I was able to get 2 of them working by putting the props in the TA that normalizes the data. The only difference among the 3 data sources is that in the one I can't get to work, there is a space in the logs before the break point. The regex I used is the same one as for the other two data sources, just with a space added before it. It is still not working, though.
I got nothing wrong. Step 2 is not possible. Yes, you can change the name of the index, but an event cannot be written to a metric index without conversion. The fact that step 1 works perfectly tells me the data is an event rather than a metric. Splunk has a tendency to overload terms. In this case, "metric" can refer to a numeric value in an event, or it can refer to a specific format of data (also numeric) that only a metric index can store. It's the format (or lack of it) that's causing the error message.
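The format distinction can be sketched by comparing the two payload shapes, e.g. as sent to the HTTP Event Collector. This is a rough illustration only; the timestamps, metric names, and dimension fields are invented:

```
# An event (what an event index stores): free-form text with numbers in it.
{"time": 1740000000, "event": "cpu=42.5 host=web01"}

# A metric data point (what a metrics index requires): an explicit
# metric_name and numeric _value, plus optional dimensions.
{"time": 1740000000, "event": "metric",
 "fields": {"metric_name": "cpu.usage", "_value": 42.5, "host": "web01"}}
```

Renaming the destination index changes where the first shape is sent, but does not turn it into the second shape.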
color isn't listed in the final table command of your search, so it doesn't appear in the final result set. If you want a field that isn't displayed in your table, start the field name with an underscore, e.g. _color, and reference that in the done handler. Try something like this:

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | eval _color = if(CALCULATED_PERCENT_FREE >= PERCENT_FREE, "#00FF00", "#FF0000")
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)" _color
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <set token="color">$result._color$</set>
    </done>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <format type="color" field="Free Space (%)">
    <colorPalette type="expression">$color|s$</colorPalette>
  </format>
</table>
Hi @tscroggins, sorry for the late response. I have the following version: 5.5.0. I also tried a private incognito browser session and got the same problem: I cannot even choose an app when trying to publish a model, so I really don't know how to solve that. I can only open the model in search and then try to apply it on new data, but I do not know if this is the same.
I'm going to work my way through all the suggestions. Since both of the replies suggested THP settings, I'll start there. Thanks.
Hello @splunker011

Try using this code:

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)"
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <!-- Use rangeMap to define colors based on "Free Space (%)" -->
  <format type="color" field="Free Space (%)">
    <rangeMap>
      <range min="0" max="20" color="#FF0000"/> <!-- Red for low free space -->
      <range min="20" max="50" color="#FFA500"/> <!-- Orange for medium -->
      <range min="50" max="100" color="#00FF00"/> <!-- Green for high free space -->
    </rangeMap>
  </format>
</table>

Have a nice day,
"is there something wrong in the logic or alternate way to do it"

Yes, the logic is wrong with the given dataset. But before I explain, please remember to post sample data in text for others to check, even when a screenshot helps illustrate the problem you are trying to diagnose. So, here is your sample data:

GroupA     GroupB
353649273  353648649
353649184  353648566
353649091  353616829
353649033  353638941
353648797
353648680
353648745
353648730
353638941

From this dataset, it is easy to see that there is no match in any event. (One event is represented by one row.) In addition, if you are going to compare GroupA and GroupB by their original names, there is no need to use foreach. The logic expressed in your SPL can easily be implemented with

eval match=if(GroupA=GroupB, GroupA, null())

Two SPL pointers: 1) the eval function null() is more expressive AND does not spend CPU cycles looking for a nonexistent field name such as NULL; more importantly, 2) foreach operates on each event (row) individually. If there is no match within the same event, match will always receive a null value.

On the second point, @livehybrid speculates about your real intent, which seems to be not to seek a match between the string/numerical fields GroupA and GroupB within individual events, but to seek matches between the set of all values of GroupA and the set of all values of GroupB. Is this the correct interpretation? If so, your first logical mistake is to misinterpret the problem as a comparison within individual events.

A second mistake is in the problem statement:

"i have data from two columns and using a third column to display the matches"

Given that there is no same-event match, your intention of "using a third column to display the matches" becomes impossible for volunteers here to interpret.
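The difference between the two interpretations can be illustrated outside SPL. A small Python sketch using the sample values above: row-by-row comparison (what the foreach logic effectively does) finds nothing, while comparing the full value sets finds one common value:

```python
# All values of each column across events (from the sample data above).
group_a = {"353649273", "353649184", "353649091", "353649033",
           "353648797", "353648680", "353648745", "353648730",
           "353638941"}
group_b = {"353648649", "353648566", "353616829", "353638941"}

# Set intersection: values present in both columns, regardless of row.
matches = group_a & group_b
print(matches)  # one common value, even though no row matches its neighbor
```
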
@livehybrid made an effort to interpret your intention as "if any value in the set of all values of GroupA matches any value in the set of all values of GroupB, display the matching values in GroupA together with ALL values of GroupB (as opposed to any specific values of GroupB)." The output from that code is

GroupA     GroupB     match
353638941  353616829  1
           353638941
           353648566
           353648649

Is this what you expect? What if there are two distinct values in GroupA matching two values in GroupB: should the column GroupA display the two matching values while the column GroupB still displays the same five values?

It all comes down to the four golden rules for asking questions in this forum, which I call the Four Commandments:

1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
Hi, I am currently trying to reference an SPL variable in Simple XML for a table panel in a dashboard. I would like each field value for the "Free space (%)" field to change depending on what the "color" variable in the query evaluates to (green or red). I found one method online which mentions creating a token in a set tag and then referencing it in the colorPalette tag, but I haven't been able to get it working:

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | eval color = if(CALCULATED_PERCENT_FREE >= PERCENT_FREE, "#00FF00", "#FF0000")
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)"
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <set token="color">$result.color$</set>
    </done>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <format type="color" field="Free Space (%)">
    <colorPalette type="expression">$color$</colorPalette>
  </format>
</table>

Any help would be appreciated, thanks
Hi @richgalloway, many thanks for your input. I think there were a few things you got wrong here. Let's begin from scratch:

1. The metrics are collected on a Windows UF and sent via a HF to the final IDX. If the index name is defined (in inputs.conf) in the collection app on the UF and sent directly through the HF to the IDX, it works perfectly.
2. If NO index name is defined in the above app, the UF's default index is used as the destination. Here I have defined a props on the HF to "catch" sourcetypes containing 'metrics' and convert (rename) the incoming (default) index name to its corresponding metric index name (i.e., from the _e_ to the _m_ type). The rename part works just fine, but something seems to happen to the raw metrics data, as the indexer now rejects it, EVEN THOUGH it's exactly the same data as in point 1 above.

About your last concern with two indexes: we have additional indexes if needed for different levels of data categories, but that said, Splunk finally works fine with search filters, so a lot can be handled that way. Thanks for your great inputs here. Everything works perfect when col
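For reference, the index rewrite described in point 2 is typically done with a transform along these lines; the stanza, sourcetype, and index names here are illustrative, not the poster's actual configuration. Note that it only rewrites the destination index key in the pipeline metadata, not the shape of the data itself:

```
# transforms.conf on the HF
[rewrite_to_metric_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_metrics_m

# props.conf on the HF
[my_metrics_sourcetype]
TRANSFORMS-rewrite_index = rewrite_to_metric_index
```
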