All Posts
Hi @tscroggins, sorry for the late response. I have version 5.5.0. I also tried a private/incognito browser session and hit the same problem: I cannot even choose an app when trying to publish a model, so I really don't know how to solve it. I can only open the model in search and then try to apply it to new data, but I don't know if that amounts to the same thing.
I'm going to work my way through all the suggestions.  Since both of the replies suggested THP settings, I'll start there. Thanks
Hello @splunker011, try this code (note that Simple XML table formatting uses colorPalette/scale rather than rangeMap):

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)"
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <!-- Color "Free Space (%)" by range: red below 20, orange from 20 to 50, green above 50 -->
  <format type="color" field="Free Space (%)">
    <colorPalette type="list">[#FF0000,#FFA500,#00FF00]</colorPalette>
    <scale type="threshold">20,50</scale>
  </format>
</table>

Have a nice day,
"is there something wrong in the logic or alternate way to do it"

Yes, the logic is wrong for the given dataset. But before I explain, please remember to post sample data as text so others can check it, even when a screenshot helps illustrate the problem you are trying to diagnose. Here is your sample data:

GroupA      GroupB
353649273   353648649
353649184   353648566
353649091   353616829
353649033   353638941
353648797
353648680
353648745
353648730
353638941

From this dataset, it is easy to see that there is no match in any event. (One event is represented by one row.) In addition, if you are comparing GroupA and GroupB under their original names, there is no need for foreach. The logic expressed in your SPL can be implemented simply as

| eval match=if(GroupA=GroupB, GroupA, null())

Two SPL pointers: 1) the eval function null() is more expressive AND does not spend CPU cycles looking for a nonexistent field named NULL; 2) more importantly, foreach operates on each event (row) individually, so if there is no match within the same event, match will always receive a null value.

On the second point, @livehybrid speculates about your real intent, which seems to be not to match the string/numeric fields GroupA and GroupB within individual events, but to find matches between the set of all values of GroupA and the set of all values of GroupB. Is this the correct interpretation? If so, your first logical mistake was to misinterpret the problem as a comparison within individual events.

A second mistake is in the problem statement:

"i have data from two columns and using a third column to display the matches"

Given that there is no same-event match, your intention of "using a third column to display the matches" becomes impossible for volunteers here to interpret. @livehybrid made an effort to interpret it as "if any value in the set of all values of GroupA matches any value in the set of all values of GroupB, display the matching values in GroupA together with ALL values of GroupB (as opposed to any specific values of GroupB)." The output from that code is

GroupA      GroupB      match
353638941   353616829   1
            353638941
            353648566
            353648649

Is this what you expect? What if two distinct values in GroupA matched two values in GroupB: should the GroupA column display the two matching values while the GroupB column still displays the same four values?

It all comes down to the four golden rules for asking questions in this forum, which I call the Four Commandments:

1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or the output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic connecting the illustrated data and the desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
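If the set comparison is indeed the intent, here is a minimal sketch of one way to compute the intersection of the two columns (assuming GroupA and GroupB are single-valued in each event, as illustrated; the index name is a placeholder):

index=your_index
| eval rows=mvappend("GroupA=".GroupA, "GroupB=".GroupB)
| mvexpand rows
| rex field=rows "(?<grp>\w+)=(?<value>.+)"
| stats dc(grp) as groups by value
| where groups=2
| fields value

mvappend tags each value with the column it came from (and silently skips the null GroupB cells), mvexpand splits the tags into one row per value, and stats dc(grp) keeps only the values that appear in both columns.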
Hi, I am currently trying to reference an SPL variable in Simple XML for a table panel in a dashboard. I would like each field value in the "Free Space (%)" column to change depending on what the "color" variable in the query evaluates to (green or red). I found one method online which mentions creating a token in a <set> tag and then referencing it in the <colorPalette> tag, but I haven't been able to get it working:

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | eval color = if(CALCULATED_PERCENT_FREE >= PERCENT_FREE, "#00FF00", "#FF0000")
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)"
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <set token="color">$result.color$</set>
    </done>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <format type="color" field="Free Space (%)">
    <colorPalette type="expression">$color$</colorPalette>
  </format>
</table>

Any help would be appreciated, thanks
Hi @richgalloway, many thanks for your input. I think there were a few things you got wrong here, so let's begin from scratch:

1. The metrics are collected on a Windows UF and sent via a HF to the final IDX. If the index name is defined (inputs.conf) in the collection app on the UF, the data is sent directly through the HF to the IDX and works perfectly.
2. If NO index name is defined in that app, the UF's default index is used as the destination. For this case I have defined a props entry on the HF to "catch" sourcetypes containing 'metrics' and rewrite the incoming (default) index name to the corresponding metrics index name (i.e. from the _e_ type to the _m_ type). The rewrite part works just fine, but something seems to happen to the raw metrics data, because the indexer now rejects it, EVEN THOUGH it is exactly the same data as in point 1 above.

About your last concern with two indexes: we have additional indexes where needed for different levels of data categories, but, that said, Splunk works fine with search filters, so a lot can be handled that way. Thanks for your great input here.
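For reference, the rewrite on the HF looks roughly like this (a sketch with placeholder stanza and index names, not my exact config):

# props.conf (HF)
[my_metrics_sourcetype]
TRANSFORMS-route_index = route_to_metrics_index

# transforms.conf (HF)
[route_to_metrics_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_metrics_index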
Hi @arunssd, if 1) your KV store collection uses array fields, 2) all field values have a 1:1:1:1 relationship, and 3) there are no empty/missing/null values within a field, i.e. all array values "line up":

asn               country   maliciousbehavior   riskscore
103.152.101.251   PK        3                   9
103.96.75.159     HK        3                   11
104.234.115.155   CA        4                   9

you can transform the data with the transpose, mvexpand, and chart commands:

| inputlookup arunssd_kv
| transpose 0
| mvexpand "row 1"
| chart values("row 1") over _mkv_child by column
| fields - _mkv_child
| outputlookup arunssd_lookup.csv

However, your results may be truncated by mvexpand if the total size of the in-memory result is greater than the limits.conf max_mem_usage_mb setting (default: 500 MB). See https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#.5Bmvexpand.5D.

If this doesn't work for you, please share your collections.conf (KV store) and transforms.conf (lookup) settings. I used the following settings to test:

# collections.conf
[arunssd_kv]
field.asn = array
field.country = array
field.maliciousbehavior = array
field.riskscore = array

# transforms.conf
[arunssd_kv]
collection = arunssd_kv
external_type = kvstore
fields_list = asn,country,maliciousbehavior,riskscore

If your KV store fields are strings, the search can be adapted with the foreach and eval commands to coerce the field values into a multivalued type; see the sketch below. You can also transform the results from a shell using curl and jq or your scripting tools of choice.
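For the string-field case, a minimal sketch (assuming the string values are space-delimited; the split delimiter is an assumption, so adjust it to your data):

| inputlookup arunssd_kv
| transpose 0
| foreach row* [ eval '<<FIELD>>' = split('<<FIELD>>', " ") ]
| mvexpand "row 1"
| chart values("row 1") over _mkv_child by column
| fields - _mkv_child
| outputlookup arunssd_lookup.csv

The only change from the array version is the foreach/eval step, which turns each delimited string into a multivalue field before mvexpand.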
One cannot redirect event data to a metrics index.  Doing so will produce the error message you see.  Data in a metrics index must be in a specific format - that is what makes them so fast.  It is possible, however, to convert an event into metrics at index time.  See https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Metrics/L2MConfiguration I must point out a fundamental flaw in the plan to have only two indexes for each customer.  It means that all data will have the same retention period and (more seriously) all data will be visible to all users in that company.  It's unlikely all of a company's data will have the same security and retention requirements.
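For reference, log-to-metrics conversion is configured along these lines (a sketch; the sourcetype, schema, and field names are placeholders, and the full set of options is in the linked docs):

# props.conf
[my_event_sourcetype]
METRIC-SCHEMA-TRANSFORMS = metric-schema:my_l2m_schema

# transforms.conf
[metric-schema:my_l2m_schema]
METRIC-SCHEMA-MEASURES = cpu_pct,mem_used_mb

Fields listed in METRIC-SCHEMA-MEASURES become metric measures, the remaining extracted fields become dimensions, and the input must be routed to a metrics index.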
@SplunkExplorer- Just to be clear, SSL offers compression during data transit only. Let's say you are sending SSL-compressed data from a UF to a HF. As soon as the HF receives the data, it decrypts and decompresses it. If you then apply SSL compression again between the HF and the IDX, the HF will compress the data again and forward it to the IDX.

I hope this helps!!! Kindly upvote if it does!!!
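On the sending side this is controlled in outputs.conf, roughly like this (a sketch; the group name and server are placeholders):

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
useClientSSLCompression = true

The receiving side must also permit SSL compression (the allowSslCompression setting in server.conf), so check both ends if compression doesn't seem to take effect.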
@Joei- I'm not sure if there is any direct connector which updates lookups in Splunk, but here is an alternative you can try if you, or a developer on your team, can create a custom Python playbook in SOAR.

Splunk offers a REST endpoint to update a lookup, which can be leveraged in a Python SOAR playbook: https://docs.splunk.com/Documentation/Splunk/9.4.0/RESTREF/RESTknowledge#data.2Flookup-table-files.2F.7Bname.7D

I hope this helps!!! Kindly upvote if it does!!!
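A minimal sketch of what such a playbook step might look like (the host, credentials, lookup name, and file path are all placeholders; note the endpoint expects the replacement CSV to already be staged on the search head's filesystem):

import requests

SEARCH_HEAD = "https://splunk.example.com:8089"
# Path to a CSV already staged on the search head,
# e.g. under $SPLUNK_HOME/var/run/splunk/lookup_tmp/
STAGED_CSV = "/opt/splunk/var/run/splunk/lookup_tmp/blocklist.csv"

resp = requests.post(
    f"{SEARCH_HEAD}/servicesNS/nobody/search/data/lookup-table-files/blocklist.csv",
    data={"eai:data": STAGED_CSV},
    auth=("svc_soar", "changeme"),  # placeholder credentials
    verify=False,  # use proper certificate validation in production
)
resp.raise_for_status()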
There is a long list of things that potentially could go wrong depending on what you do to the server to harden it.  It's hard to be specific about the results if you can't be specific about the changes.  We're all volunteers here, so try to meet us halfway.
@SHEBHADAYANA - Can you please share the details of what you configured exactly and what alert you received?
Thanks, I guess we have no choice but to test it out. In your experience, what could the impact be on the Splunk application?
As I mentioned, we want to harden the Linux server following the CIS benchmark. There is a long list of things to be done, so it's hard to put down everything here... The goal is to make the server and the application more secure.
@secure- A custom command from this app might help: https://splunkbase.splunk.com/app/4297

Kindly upvote if it helps!!!
Thank you for the response. I will investigate the option you suggested.
Hi @DataOrg , I used some tables from the Architect Training course, but, as I said, it depends on the number of scheduled searches, the number of concurrent users, and the presence of Premium apps. Ciao. Giuseppe
Hi @boknows , host is metadata configured at index time, so it is set only once. You could also define a calculated field that overrides the host field, but I don't like that approach. So I suggest putting the transformation on the indexers, and possibly also on the UF, even though that isn't required.

Check, using the regex command in a Splunk search, whether there's something unexpected in your events that prevents the regex from matching, e.g. a space at the beginning of the event.

Ciao. Giuseppe
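For reference, an index-time host override looks roughly like this (a sketch; the stanza names and the regex are placeholders for your own pattern):

# transforms.conf
[set_host_from_event]
REGEX = hostname=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

# props.conf
[your_sourcetype]
TRANSFORMS-set_host = set_host_from_event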
My understanding is that Azure API Management doesn't support OTel natively. So, I think your solution will require some creativity on the Azure side. Perhaps you could look into ways to enable traces with Application Insights and then see if you can export them to OTel from there (this is just an idea--I've never tried this). You might be able to get metrics using OTel with the azuremonitor receiver. https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/azuremonitorreceiver
I think your understanding of your current scenario is all correct.  It's possible that Azure monitor has a way to create this new dimension there. So, when you export the metric through an integration like the one used by Splunk Observability Cloud, the custom DBNAME dimension is already there. That's just an idea--I don't know if Azure Monitor has a feature to do this or not, but it seems possible they might. Another possibility would be to collect this metric with an OpenTelemetry collector instead of using the Azure Cloud integration. There is a new OTel receiver being developed called azuremonitor.  https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/azuremonitorreceiver If you can collect this metric with OTel, then you can use an OTel processor to extract the short name of the database and add it to your metric using an OTel attributes processor. https://docs.splunk.com/observability/en/gdi/opentelemetry/components/attributes-processor.html
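To make the second idea concrete, here is a sketch of a collector config (the credential values are placeholders, db_full_name is a hypothetical attribute name, and the azuremonitor receiver is still evolving, so check its README for the current option names):

receivers:
  azuremonitor:
    tenant_id: "${AZURE_TENANT_ID}"
    client_id: "${AZURE_CLIENT_ID}"
    client_secret: "${AZURE_CLIENT_SECRET}"
    subscription_id: "${AZURE_SUBSCRIPTION_ID}"

processors:
  # Extract a short DBNAME from a hypothetical db_full_name attribute
  # using the attributes processor's regex "extract" action.
  attributes/dbname:
    actions:
      - key: db_full_name
        pattern: ^(?P<DBNAME>[^.]+)
        action: extract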