
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Welcome to Splunk O11y Cloud, Nickhills! From your first message, I see that you are deploying the collector as a Gateway (in the helm install command, the parameter gateway.enabled is set to true). Can you confirm that you need to set up OpenTelemetry as a gateway? If not, please try to reinstall it as an agent and let us know how it goes.

Regards,
Houssem
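A minimal sketch of what the reinstall might look like, assuming the release and chart names quoted later in this thread (your token, realm, and cluster name will differ); in this chart, omitting gateway.enabled leaves the collector in the default agent mode:

# Remove the gateway deployment, then reinstall in the default agent mode.
helm uninstall splunk-otel-collector
helm install splunk-otel-collector \
  --set="cloudProvider=aws,distribution=eks" \
  --set="splunkObservability.accessToken=xxxx-xxxxx" \
  --set="splunkObservability.realm=eu2" \
  --set="clusterName=eks-test-cluster" \
  splunk-otel-collector-chart/splunk-otel-collector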
The earliest=-30d is there so I can get an average count for each triggered alert over the past 30 days. My question is: how can I limit those results so I only see records from yesterday, not the other 29 days?
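One way to do this (a sketch only, since the original search isn't shown; index=_audit action=alert_fired and the ss_name field are assumptions about where the triggered alerts live): compute the 30-day statistics with eventstats first, then filter the events down to yesterday afterwards, so the average still covers the full window:

index=_audit action=alert_fired earliest=-30d@d latest=@d
| eventstats count AS total_30d BY ss_name
| eval avg_daily = total_30d / 30
| where _time >= relative_time(now(), "-1d@d") AND _time < relative_time(now(), "@d")
| stats count AS yesterday_count, avg(avg_daily) AS avg_daily BY ss_name

The key step is that eventstats runs before the where clause, so the 30-day average is computed over all events while only yesterday's records remain in the output.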
Hello, I'm experiencing a connectivity issue when trying to send events to my Splunk HTTP Event Collector (HEC) endpoint. I have confirmed that HEC is enabled, and I am using a valid authorization token. Here's the command I am using:

curl -k "https://[your-splunk-instance].splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk [your-token]" \
  -H "Content-Type: application/json" \
  -d '{"event": "Hello, Splunk!"}'

Unfortunately, I receive the following error:

curl: (28) Failed to connect to [your-splunk-instance].splunkcloud.com port 8088 after [time] ms: Couldn't connect to server

Troubleshooting steps taken:
- Successful connection from another user: notably, another user on a different system was able to successfully use the same curl command to reach the same endpoint.
- Network connectivity: I verified network connectivity using ping and received a timeout for all requests. I performed a traceroute and found that packets are lost after the second hop.

Despite these efforts, the issue persists. If anyone has encountered a similar issue or has suggestions for further troubleshooting, I would greatly appreciate your help. Thank you!
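Two checks that may help narrow this down (a sketch; note that Splunk Cloud endpoints commonly drop ICMP, so a ping timeout is not conclusive on its own, and a TCP-level probe of port 8088 is more telling). The /services/collector/health endpoint responds without needing event data:

# Test the TCP handshake itself rather than ICMP, which is often filtered
nc -vz [your-splunk-instance].splunkcloud.com 8088

# Let curl report connection details in verbose mode
curl -kv --connect-timeout 10 \
  "https://[your-splunk-instance].splunkcloud.com:8088/services/collector/health"

If these fail from your machine but succeed from the other user's system, the block is most likely in your local network path, which matches the traceroute result.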
SPL does not support conditional execution of commands. It can be simulated in a dashboard by setting a token to the desired search string and referencing the token in the query.

<fieldset>
  <input token="myInput"...>
    <change>
      <condition match="<<option 1>>">
        <set token="query">SPL for option 1</set>
      </condition>
      <condition match="<<option 2>>">
        <set token="query">SPL for option 2</set>
      </condition>
    </change>
  </input>
</fieldset>
...
<row>
  <panel>
    <table>
      <search>
        <query>$query$</query>
      </search>
    </table>
  </panel>
</row>
Hi gcusello.

Many thanks for your reply. I understand that Splunk does not have this functionality, so I will need to look at how I can create two accounts for an individual while trying to use SSO. We have Analysts that develop use cases on Enterprise Security, and for some of the functionality they need, they need Admin rights; but when they are doing their day-to-day role they should remain Analysts. The way Splunk does it, as you say, I can apply both profiles, but they get to use whatever they need within those two profiles.

As for ISO27001 (2022), I am also an Auditor and was looking at ISO27002 8.2 Privileged access rights, guidance paragraph [i]: "only using identities with privileged access rights for undertaking administrative tasks and not for day-to-day general tasks [i.e. checking email, accessing the web (users should have a separate normal network identity for these activities)]." I read this as also covering normal user activity within Splunk.

Kind regards
Neil
Dashboard panels have recently stopped being a consistent size across the row, introducing grey space on our boards. This is happening throughout the app on Splunk Cloud version 9.2.2403.111. Does anyone know of any changes or settings which may have affected this, and how it can be resolved? Thanks
Sorry for the confusion. Search one gets a result like this:

identity | email | extensionattribute10 | extensionattribute11 | first | last
nsurname | name.surname@domain.com | nsurnameT1@domain.com | name.surname@consultant.com | name | surname

Search two gets all the tickets that were created for people leaving my company and returns results like this:

_time | affect_dest | active | description | dv_state | number
2024-10-31 09:46:55 | STL Leaver | true | Leaver Request for Name Surname - 31/10/2024 | active | INC01

So the only way of searching would be to search the second query's description field for where first and last appear.
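One way to wire the two searches together (a sketch; the index names are placeholders, since the real searches aren't shown): let a subsearch build a wildcard filter on description from the first and last fields of search one:

index=ticket_index sourcetype=ticket
    [ search index=identity_index
      | eval description = "*" . first . " " . last . "*"
      | fields description ]
| table _time affect_dest active description dv_state number

The subsearch renders as description="*name surname*", so only tickets whose description contains the first and last name survive the outer search.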
Just to follow up: if we disable the operator, the deployment is successful, but we have no APM. This issue specifically seems to relate to the operator and the APM instrumentation.
On a fresh deployment of O11y, we are following the guide from the setup wizard and running the following helm install command:

helm install splunk-otel-collector --set="cloudProvider=aws,distribution=eks,splunkObservability.accessToken=xxxx-xxxxx,clusterName=eks-test-cluster,splunkObservability.realm=eu2,gateway.enabled=true,splunkPlatform.endpoint=https://my-splunk-cloud-hec-input,splunkPlatform.token=my-token,splunkObservability.profilingEnabled=true,environment=test,operator.enabled=true,agent.discovery.enabled=true" splunk-otel-collector-chart/splunk-otel-collector

However, this fails with the following error:

Error: INSTALLATION FAILED: template: splunk-otel-collector/templates/operator/instrumentation.yaml:2:4: executing "splunk-otel-collector/templates/operator/instrumentation.yaml" at <include "splunk-otel-collector.operator.validation-rules" .>: error calling include: template: splunk-otel-collector/templates/operator/_helpers.tpl:17:13: executing "splunk-otel-collector.operator.validation-rules" at <.Values.instrumentation.exporter.endpoint>: nil pointer evaluating interface {}.endpoint

This seems to be because _helpers.tpl is expecting a value for instrumentation.exporter.endpoint; however, the value according to the chart (and the documentation) is instrumentation.endpoint:
https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/_helpers.tpl (line 13 is where it is mentioned).

We have tried providing instrumentation.exporter.endpoint as an additional parameter, but instead get the error:

Values don't meet the specifications of the schema(s) in the following chart(s): splunk-otel-collector: - instrumentation: Additional property exporter is not allowed

(Which is true: instrumentation.exporter.endpoint is not defined here, at line 20: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/instrumentation.yaml )

We also get the same error if we provide a complete values.yaml file with both formats of the instrumentation endpoint defined.

It looks like _helpers.tpl was edited to include this endpoint specification about a month ago, so surely we cannot be the first people to be tripped up by this? Is there anything else I can try, or do we need to wait for the operator to be fixed?
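One workaround worth trying while this is open (a sketch; <known-good-version> is a placeholder for whichever chart release predates the _helpers.tpl change, which I have not verified): pin the chart to an earlier version so the new validation rule never renders:

helm repo update
# List available chart versions to find one from before the change
helm search repo splunk-otel-collector-chart/splunk-otel-collector --versions
# Install a pinned version with the same value overrides as before
helm install splunk-otel-collector \
  --version <known-good-version> \
  --set="cloudProvider=aws,distribution=eks,operator.enabled=true" \
  splunk-otel-collector-chart/splunk-otel-collector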
Hi @Wardy1380,

yes, you can assign a user both the Analyst and Admin roles, and they can use the features they need for their job.

It isn't possible to choose which role to use one at a time: you have all the features of all the roles you are enabled for. In other words, a user cannot hold the Analyst role and then decide to elevate themselves to the Admin role: they always have all the features from all of their roles.

If you want to separate roles, you have to use two different accounts, one for each role.

About ISO27001 (I'm also a Lead Auditor), I'm not sure that there's a requirement that explicitly requires elevating a user role, only the "need to know" requisite, and Splunk satisfies that requirement.

Ciao.
Giuseppe
Have you reached out to anyone else, or found an alternate solution? It seems like Splunk support is lacking in this.
Does Splunk on-prem or cloud have a solution that allows users to be an Analyst when doing that role, and to sign in as or elevate to Admin when carrying out Admin tasks? This is something that many standards, such as ISO27001 and others, require. I hope someone can help; if not, I will make a feature request.
Not really. Please provide a sample anonymised event in raw format, in a code block (using the </> button above) to preserve the format of the event. Please indicate which part of the event you want extracted.
This post is old but unanswered. I did it this way:

index=itsi_grouped_alerts
| lookup itsi_notable_group_user_lookup event_identifier_hash AS itsi_group_id
| search status=1
Hi @LittleFatFish,

in this case, check if the sourcetype in props.conf is correct, and especially if it's overridden; maybe when the transformation is applied your events still have the original sourcetype.

Then obviously (but I'm sure that you already did it) check the regex in transforms.conf again.

Ciao.
Giuseppe
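A quick way to verify which configuration actually wins (a sketch assuming a *nix install and the kube_audit sourcetype mentioned elsewhere in this thread):

# Show the effective props.conf stanza and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool props list kube_audit --debug
# Same check for the transform itself
$SPLUNK_HOME/bin/splunk btool transforms list --debug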
Hi, brother, I have encountered a confusing problem regarding Postgres data integration. I want to execute this statement every two minutes to retrieve the latest results from the database, so I use the following statement for filtering:

SELECT history_uint.itemid, history_uint.value, interface.ip, items.name, hosts.host,
       TO_TIMESTAMP(history_uint.clock) AS clock_datetime
FROM history_uint
JOIN items ON history_uint.itemid = items.itemid
JOIN hosts ON items.hostid = hosts.hostid
JOIN interface ON interface.hostid = hosts.hostid
WHERE history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)
  AND items.flags = 4
  AND (items.name LIKE '%Bits sent%' OR items.name LIKE '%Bits received%')
  AND (hosts.host LIKE 'CN%' OR hosts.host LIKE 'os%')
ORDER BY history_uint.clock DESC
LIMIT 90000;

This SQL statement executes perfectly in database tools. However, it cannot be executed in dbxquery, and the error is unknown. I found that the key reason is this expression:

history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)

When I replaced FLOOR(EXTRACT(EPOCH FROM NOW())) - 120 with a literal Unix timestamp, everything was fine. I tried to replace it with other SQL statements, but they all failed. Please help me analyze this issue.

Some environment information:
Postgres version: 14.9
Java version: 21.0.5
DB Connect version: 3.18.1
Postgres JDBC version: 1.2.1

Thank you.
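One thing worth checking (a sketch; the idea that the driver mishandles the NUMERIC type returned by FLOOR(EXTRACT(...)) is an assumption, not confirmed by the post): cast the expression explicitly to BIGINT so it compares against clock as a plain integer, the same way a literal Unix timestamp does:

-- First confirm that the expression alone evaluates through dbxquery
SELECT CAST(EXTRACT(EPOCH FROM NOW()) AS BIGINT) - 120 AS cutoff;

-- If that works, use the cast form in the original filter
SELECT history_uint.itemid, history_uint.value
FROM history_uint
WHERE history_uint.clock > (CAST(EXTRACT(EPOCH FROM NOW()) AS BIGINT) - 120);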
I have a suspicion that this is another homework assignment, as multiple people posted exactly the same questions at the same time. But if there is a real use case, try assigning search terms to the token directly, like this:

<form version="1.1">
  <label>Pass codebase</label>
  <description>https://community.splunk.com/t5/Splunk-Search/passing-diffrent-base-search-based-on-inputput-dropdown-values/m-p/703055</description>
  <search>
    <query>| makeresults | eval curTime=strftime(now(), "GMT%z") | eval curTime=substr(curTime,1,6) | rename curTime as current_time</query>
    <progress>
      <set token="time_token_now">$result.current_time$</set>
    </progress>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="radio" token="field1">
      <label>field1</label>
      <choice value="All">All</choice>
      <choice value="M1">M1</choice>
      <choice value="A2">A2</choice>
      <change>
        <eval token="base_token">case("All"==field1, "| makeresults | eval message = \"Give me an A\"", "All"!=field1, "| makeresults | eval message = \"Give me a B\"")</eval>
      </change>
    </input>
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Panel 1</title>
      <table>
        <title>base_token: $base_token$</title>
        <search>
          <query>$base_token$ | eval do_i_get_A = if(match(message, "\bA$"), "Yes!", "naha")</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <title>Panel 2</title>
      <table>
        <title>base_token: $base_token$</title>
        <search>
          <query>$base_token$ | eval do_i_get_B = if(match(message, "\bB$"), "Yes!", "naha")</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Here are the three selections: (screenshots not reproduced here)

Note your original code has a syntax error in the case function. But even without it, tokens cannot be used to set XML attributes; that's why it would not work as desired. The above dashboard instead sets the token value to the search string. In the strict sense, this is an inline search, not a chained search. If you have many panels, the same search might be executed too many times. If performance is very important, there may be a way to set a token value to the base search's sid and use the loadjob command.
Hey @gcusello!

Removing this option from my tcpout stanza would mean that everything else my heavy forwarder sends to my indexers would no longer be forwarded. My main issue is that my third-party host gets sent everything from my sourcetype kube_audit instead of only a specific part (which should be everything matching my regex).

So I have a setup where my heavy forwarder sends a lot to the indexers in my environment, but now, for security purposes, we want to send a specific part through the syslog output processor to a third-party host, which should receive it on port 514 via UDP. Instead of respecting the regex defined in my transforms.conf, it sends everything with the sourcetype kube_audit defined in my props.conf (what you would expect if "REGEX = (.)" were used).

Is there another way you fixed it? Thanks for helping
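For reference, the usual shape for selective syslog routing looks like this (a sketch; the stanza names and placeholder values are illustrative, not taken from your configuration):

# props.conf -- apply the routing transform only to the kube_audit sourcetype
[kube_audit]
TRANSFORMS-route_syslog = send_subset_to_syslog

# transforms.conf -- only events matching REGEX get the syslog routing key
[send_subset_to_syslog]
REGEX = <your-pattern>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party_syslog

# outputs.conf -- the syslog group referenced by FORMAT
[syslog:third_party_syslog]
server = <third-party-host>:514
type = udp

One thing worth checking: if the syslog group also appears in a defaultGroup setting in outputs.conf, every event is routed there regardless of the transform, which would match the "everything gets sent" symptom you describe.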
Hi @LittleFatFish,

what's your issue: are you not sending logs to the third-party syslog, or are you sending all your logs?

I have experienced both issues. I solved the first by removing:

defaultGroup = my_indexers

from the [tcpout] stanza.

Ciao.
Giuseppe