All Posts

On a fresh deployment of O11y, we are following the guide from the setup wizard and running the following helm install command:

helm install splunk-otel-collector --set="cloudProvider=aws,distribution=eks,splunkObservability.accessToken=xxxx-xxxxx,clusterName=eks-test-cluster,splunkObservability.realm=eu2,gateway.enabled=true,splunkPlatform.endpoint=https://my-splunk-cloud-hec-input,splunkPlatform.token=my-token,splunkObservability.profilingEnabled=true,environment=test,operator.enabled=true,agent.discovery.enabled=true" splunk-otel-collector-chart/splunk-otel-collector

However, this fails with the following error:

Error: INSTALLATION FAILED: template: splunk-otel-collector/templates/operator/instrumentation.yaml:2:4: executing "splunk-otel-collector/templates/operator/instrumentation.yaml" at <include "splunk-otel-collector.operator.validation-rules" .>: error calling include: template: splunk-otel-collector/templates/operator/_helpers.tpl:17:13: executing "splunk-otel-collector.operator.validation-rules" at <.Values.instrumentation.exporter.endpoint>: nil pointer evaluating interface {}.endpoint

This seems to be because _helpers.tpl expects a value for instrumentation.exporter.endpoint, whereas the value according to the chart (and the documentation) is instrumentation.endpoint. See https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/_helpers.tpl (line 13 is where it is mentioned).

We have tried providing instrumentation.exporter.endpoint as an additional parameter, but instead get the error:

Values don't meet the specifications of the schema(s) in the following chart(s): splunk-otel-collector: - instrumentation: Additional property exporter is not allowed

(Which is true: instrumentation.exporter.endpoint is not defined here, at line 20: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/instrumentation.yaml )

We also get the same error if we provide a complete values.yaml file with both formats of the instrumentation endpoint defined.

It looks like _helpers.tpl was edited to include this endpoint specification about a month ago, so surely we cannot be the first people to be tripped up by this? Is there anything else I can try, or do we need to wait for the operator to be fixed?
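For clarity, this is the shape of the two settings we mean in values.yaml (a sketch only; the endpoint URL is a placeholder):

instrumentation:
  # documented key, per the chart docs
  endpoint: http://hypothetical-collector:4317
  # key that _helpers.tpl line 13 dereferences; the chart's values schema rejects this form
  exporter:
    endpoint: http://hypothetical-collector:4317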
Hi @Wardy1380 , yes, you can assign a user both the Analyst and Admin roles, and they can use the features they need for the job. It isn't possible to choose which role to use one at a time: you always have all the features of every role you are assigned. In other words, a user cannot hold the Analyst role and then decide to elevate themselves to the Admin role: they always have all the features from all of their roles. If you want to separate roles, you have to use two different accounts, one for each role. About ISO27001 (I'm also a Lead Auditor), I'm not sure there's a requirement that specifically mandates elevating a user role, only the "need to know" requisite, and Splunk satisfies that requirement. Ciao. Giuseppe
Have you reached out to anyone else, or found an alternate solution? Seems like Splunk support is really lacking in this.
Does Splunk, on-prem or Cloud, have a solution that allows users to work as an Analyst when doing that role, and to sign in or elevate to Admin when carrying out admin tasks? This is something that many standards require, such as ISO27001 and others. I hope someone can help; if not, I will make a feature request.
Not really. Please provide a sample anonymised event in raw format, in a code block (using the </> button above) to preserve the format of the event. Please indicate which part of the event you want extracted.
This post is old but unanswered. I did it this way:  index=itsi_grouped_alerts | lookup itsi_notable_group_user_lookup event_identifier_hash as itsi_group_id | search status=1
Hi @LittleFatFish , in this case, check that the sourcetype in props.conf is correct, and especially whether it's overridden; maybe when the transformation is applied your events still have the original sourcetype. Then obviously (but I'm sure you already did it) check the regex in transforms.conf again. Ciao. Giuseppe
Hi, brother, I have encountered a confusing problem regarding Postgres data integration. I want to execute this statement every two minutes to retrieve the latest results from the database, so I use the following statement for filtering:

SELECT history_uint.itemid, history_uint.value, interface.ip, items.name, hosts.host, TO_TIMESTAMP(history_uint.clock) AS clock_datetime
FROM history_uint
JOIN items ON history_uint.itemid = items.itemid
JOIN hosts ON items.hostid = hosts.hostid
JOIN interface ON interface.hostid = hosts.hostid
WHERE history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)
AND items.flags = 4
AND (items.name LIKE '%Bits sent%' OR items.name LIKE '%Bits received%')
AND (hosts.host LIKE 'CN%' OR hosts.host LIKE 'os%')
ORDER BY history_uint.clock DESC
LIMIT 90000;

This SQL statement executes perfectly in database tools. However, it cannot be executed in dbxquery, and the error is unknown. I found that the key culprit is this expression:

history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)

When I replaced FLOOR(EXTRACT(EPOCH FROM NOW())) - 120 with a literal Unix timestamp, everything was fine. I tried to replace it with other SQL expressions, but they all failed. Please help me analyze this issue. Thank you.

Some environment information:
Postgres version: 14.9
Java version: 21.0.5
DB Connect version: 3.18.1
Postgres JDBC version: 1.2.1

Thank you
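As a possible workaround (the connection name below is anonymised), since a literal Unix timestamp works, I am considering computing the cutoff epoch in SPL first and substituting it into the query with the map command, roughly:

| makeresults
| eval cutoff = now() - 120
| map maxsearches=1 search="| dbxquery connection=\"postgres_zabbix\" query=\"SELECT history_uint.itemid, history_uint.value FROM history_uint WHERE history_uint.clock > $cutoff$ LIMIT 100\""

Here map replaces $cutoff$ with the epoch computed by eval, so dbxquery only ever sees a literal number.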
I have a suspicion that this is another homework assignment, as multiple people posted exactly the same question at the same time. But if there is a real use case, try assigning the search terms to a token directly, like this:

<form version="1.1">
  <label>Pass codebase</label>
  <description>https://community.splunk.com/t5/Splunk-Search/passing-diffrent-base-search-based-on-inputput-dropdown-values/m-p/703055</description>
  <search>
    <query>| makeresults | eval curTime=strftime(now(), "GMT%z") | eval curTime=substr(curTime,1,6) | rename curTime as current_time</query>
    <progress>
      <set token="time_token_now">$result.current_time$</set>
    </progress>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="radio" token="field1">
      <label>field1</label>
      <choice value="All">All</choice>
      <choice value="M1">M1</choice>
      <choice value="A2">A2</choice>
      <change>
        <eval token="base_token">case("All"==field1, "| makeresults | eval message = \"Give me an A\"", "All"!=field1, "| makeresults | eval message = \"Give me a B\"")</eval>
      </change>
    </input>
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Panel 1</title>
      <table>
        <title>base_token: $base_token$</title>
        <search>
          <query>$base_token$ | eval do_i_get_A = if(match(message, "\bA$"), "Yes!", "naha")</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <title>Panel 2</title>
      <table>
        <title>base_token: $base_token$</title>
        <search>
          <query>$base_token$ | eval do_i_get_B = if(match(message, "\bB$"), "Yes!", "naha")</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Here are the three selections (screenshots omitted).

Note your original code has a syntax error in the case function. But even without it, tokens cannot be used to set an XML attribute; that's why it would not work as desired. The above dashboard instead sets the token value to the search string. In the strict sense, this is an inline search, not a chained search. If you have many panels, the same search might be executed too many times. If performance is very important, there may be a way to set a token to the base search's sid and use the loadjob command.
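For example, a rough sketch of that sid/loadjob idea (untested; the id and token names are made up):

<search id="base_search">
  <query>| makeresults | eval message = "Give me an A"</query>
  <done>
    <set token="base_sid">$job.sid$</set>
  </done>
</search>
<!-- then each panel reuses the finished job instead of re-running the search -->
<search>
  <query>| loadjob $base_sid$ | eval do_i_get_A = if(match(message, "\bA$"), "Yes!", "naha")</query>
</search>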
Hey @gcusello ! Removing this option from my tcpout stanza would mean that everything else being logged to my indexers would no longer be sent by my heavy forwarder. My main issue is that my third-party host gets sent everything from my sourcetype kube_audit instead of only a specific part (which should be everything matching my regex). So I have a setup where my heavy forwarder sends a lot to the indexers in my environment, but now, for security purposes, we want to send a specific part through the syslog output processor to a third-party host, which should receive it on port 514 via UDP. Instead of respecting the regex defined in my transforms.conf, it sends everything with the sourcetype kube_audit defined in my props.conf (the behaviour you would expect from "REGEX = (.)"). Did you fix it another way? Thanks for helping
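For reference, this is roughly what our routing configuration looks like (the regex, group name, and host below are placeholders):

props.conf:
[kube_audit]
TRANSFORMS-route_subset = route_kube_audit_subset

transforms.conf:
[route_kube_audit_subset]
REGEX = my-audit-pattern
DEST_KEY = _SYSLOG_ROUTING
FORMAT = thirdparty_syslog

outputs.conf:
[syslog:thirdparty_syslog]
server = thirdparty.example.com:514
type = udp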
Hi @LittleFatFish , what's your issue: that you don't send logs to the third-party syslog, or that you send all your logs? I experienced both issues. I solved the first by removing: defaultGroup = my_indexers from the [tcpout] stanza. Ciao. Giuseppe
Hi @victorcorrea , I usually use the monitor input for this requirement. I like the batch input, which removes files after reading, but I experienced an issue with big files, so, to my knowledge, the best solution is monitor. I have never used MonitorNoHandle. About permissions, you have to grant read access to the splunk user or to its group; usually files to read have 644. Ciao. Giuseppe
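As a minimal sketch of what I mean (the path, index, and sourcetype are only examples):

inputs.conf:
[monitor:///var/log/myapp/*.log]
index = main
sourcetype = myapp:log
disabled = false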
Hi @hazem , Splunk Stream is a packet capture app; to my knowledge it isn't the best solution for DNS logs. I usually use the Splunk_TA_Windows add-on. Ciao. Giuseppe
Hi @chandrag , on Splunk Cloud you have few options for action; open a case with Splunk Cloud Support. It seems that KV Store is disabled on one component of your architecture. That is a best practice for indexers and heavy forwarders, but the add-on you're using requires KV Store. Ciao. Giuseppe
In Splunk Cloud, for one of my client environments, I'm seeing the below message: TA-pps_ondemand Error: KV Store is disabled. Please enable it to start the data collection. Please help me with a suitable solution.
Hi Bhumi, For example, with that instruction I am also extracting data from the msg_old field where "full" appears inside another word, like: "she works successfully for us", but what I need is just the data that contains the word "full" by itself. For instance: "The house is full." Hope this clarifies. Thanks.
Hi ITWhisperer, For example, with that instruction I am also extracting data from the msg_old field where "full" appears inside another word, like: "she works successfully for us", but what I need is just the data that contains the word "full" by itself. For instance: "The house is full." Hope this clarifies. Thanks.
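In other words, I think I need a word-boundary match; a minimal sketch of the distinction, assuming the field is msg_old as above:

| makeresults
| eval msg_old="she works successfully for us"
| append [| makeresults | eval msg_old="The house is full"]
| regex msg_old="(?i)\bfull\b"

Only "The house is full" survives, because \b stops "full" from matching inside "successfully".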
Thank you very much. This is working for me and feels a bit faster than before.
Hey all, two-part question here... I'm using the MLTK Smart Forecasting tool to model hard drive free space (time vs hard drive space). Currently the y-axis (i.e. free space % / free space MB) automatically adjusts its range to the graph produced by the forecast. I would want it instead to run from 0-100 (in the case of Free Space %) and be something reasonable for the Free Space MB line. By extension, how would I get the graph to tell or show me where the x-intercept of the prediction and/or confidence interval would be (i.e. tell me the date the hard drive would theoretically run out of space, where Free Space %/MB = 0)? Attached is an image of the current output visualization should this help.
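For context on the second part, this is the kind of answer I'm after, sketched outside the Smart Forecasting UI with the core predict command (the search feeding it below is hypothetical):

index=perfmon sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| timechart span=1d avg(Value) as free_pct
| predict free_pct as prediction future_timespan=90
| where prediction <= 0
| head 1
| eval runout_date = strftime(_time, "%Y-%m-%d")
| table runout_date

i.e. the first forecasted day on which free space would reach 0.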