All Posts



Hi @kgiri253, good for you, see you next time! Let me know if I can help you further, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Thanks @gcusello, the documentation that you shared helped resolve this issue. By default the above-mentioned limit is 500, and the reports are listed in lexicographical order. Our report's name starts with "S", which put it over and above 500. We have now increased the limit to 1000, which worked for us. Thanks again for the prompt reply.
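The behaviour described above, a lexicographically sorted listing truncated at a limit, can be sketched as follows. The report names and counts here are synthetic, invented purely for illustration:

```python
# Sketch: a report whose name sorts late lexicographically can fall
# outside a listing capped at 500 entries (all names are made up).
report_names = [f"Alpha report {i:03d}" for i in range(600)] + ["S - quarterly report"]

def visible_reports(names, limit=500):
    """Return the reports that fit within the listing limit after
    lexicographic sorting (mirrors the behaviour described above)."""
    return sorted(names)[:limit]

print("S - quarterly report" in visible_reports(report_names, limit=500))
print("S - quarterly report" in visible_reports(report_names, limit=1000))
```

With the cap at 500, the 600 "Alpha" names fill the listing and the "S" report is cut off; raising the limit to 1000 makes it visible again, which matches the fix described.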
That's sad. Are there other tools that can help me with the analysis? And what do you advise?
Hi @elend, there's no way to pass a token to a report, because a report is meant to be used without parameters. If you have parameters (tokens), you can use a dashboard as a report, so you can pass the token from one dashboard to another dashboard, as also described by @bowesmana. Ciao. Giuseppe
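For the dashboard-to-dashboard route, Splunk accepts token values as `form.<token>` query parameters in the dashboard URL. A small Python sketch that builds such a link; the host, app, dashboard, and token names here are made-up examples:

```python
from urllib.parse import urlencode

def dashboard_link(base_url, app, dashboard, tokens):
    """Build a link to a Splunk dashboard, pre-setting its input tokens
    via form.<token> query parameters (names are placeholders)."""
    query = urlencode({f"form.{name}": value for name, value in tokens.items()})
    return f"{base_url}/app/{app}/{dashboard}?{query}"

url = dashboard_link(
    "https://splunk.example.com:8000/en-US",
    app="search",
    dashboard="secondary_dashboard",
    tokens={"earliest": "-24h", "latest": "now", "host_tok": "web01"},
)
print(url)
```

Opening such a link sets `$earliest$`, `$latest$`, and `$host_tok$` in the target dashboard as if the user had filled in the inputs.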
I don't want to change the time range.
And what is the fix for that? Because this annoying error is messing up my Ansible variables. I had to use Splunk UF version 8.x - it works fine. I had other issues on Splunk Enterprise version 9.x as well - disappointing.
This annoying 'non-impacting' known issue is messing up my Ansible variables under facts.d, and eventually all my Ansible roles - user creations (including the splunk user), LDAP, etc. - end up in an 'impacting issue' and fatal-error situation. I tested it by using Splunk UF version 8.x in my Ansible playbooks - everything works seamlessly and fine. What is the fix for this IMPACTING known issue?
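If the known issue here is extra warning text printed on stdout, which then breaks JSON parsing of the facts.d output, one possible workaround is to have the fact script extract just the JSON object before parsing. This is only a sketch of that idea, not an official fix, and the sample output below is invented:

```python
import json

def parse_fact_output(raw):
    """Extract the JSON object from command output that may be polluted
    by warning lines (a workaround sketch, not an official fix)."""
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON object found in output")
    return json.loads(raw[start:raw.rfind("}") + 1])

# Invented example of polluted output:
noisy = 'WARNING: some non-impacting message\n{"splunk_version": "9.0.4"}\n'
facts = parse_fact_output(noisy)
print(facts["splunk_version"])
```

The same approach can be applied inside a custom fact script so that Ansible only ever sees clean JSON.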
Hi @tuts, I performed a similar search in the past: there's no automatic analysis that you can perform. You can only search the inbound accesses and outbound transactions, working with a security specialist to identify possible threats or compromises. Ciao. Giuseppe
Scenario: The device has been compromised, and we want to understand how the breach occurred. We have extracted the Setup, Security, and Application logs from the device in CSV format and uploaded them to Splunk. Question: What is the best way to automatically analyze this data in Splunk and identify any suspicious information?
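As a first triage step on exported Windows logs, one common approach is to count occurrences of event IDs that are frequently reviewed after a compromise. A minimal Python sketch of that idea; the `EventID` column name and the sample rows are assumptions about the CSV export format:

```python
import csv
import io
from collections import Counter

# Windows Security-log event IDs often reviewed after a compromise
# (a starting point, not an exhaustive detection list):
SUSPICIOUS_EVENT_IDS = {
    "4625": "failed logon",
    "4720": "user account created",
    "4672": "special privileges assigned at logon",
    "1102": "audit log cleared",
}

def flag_events(csv_text, id_column="EventID"):
    """Count noteworthy event IDs in an exported CSV.
    The column name is an assumption; adjust it to your export."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row.get(id_column) in SUSPICIOUS_EVENT_IDS:
            counts[row[id_column]] += 1
    return counts

sample = (
    "EventID,Message\n"
    "4625,An account failed to log on\n"
    "4720,A user account was created\n"
    "4625,An account failed to log on\n"
)
print(flag_events(sample))
```

In Splunk itself, the equivalent would be a stats count by EventCode over the uploaded data; the sketch just shows which IDs are usually worth counting first.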
Okay, I think that's done. Another issue I want to ask about, still related to this tokenization: is it possible to pass a token from a dashboard to a report?
Hi @kgiri253, the match that you indicated surely isn't correct; check it. I usually prefer to list every report on its own line: <saved name="<your_report>" /> You can find more information at https://dev.splunk.com/enterprise/docs/developapps/createapps/addnavsplunkapp/ Ciao. Giuseppe
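If you have many reports, the per-line `<saved>` entries for the app's navigation XML can be generated rather than typed by hand. A sketch using Python's ElementTree; the collection label and report names are placeholders:

```python
import xml.etree.ElementTree as ET

def build_nav_collection(label, report_names):
    """Build a <collection> of <saved> nav entries, one report per line,
    for an app's navigation XML (label and names are placeholders)."""
    collection = ET.Element("collection", {"label": label})
    for name in report_names:
        ET.SubElement(collection, "saved", {"name": name})
    return ET.tostring(collection, encoding="unicode")

nav_xml = build_nav_collection("Reports", ["My First Report", "My Second Report"])
print(nav_xml)
```

The resulting fragment can be pasted into the `<nav>` element of the app's default.xml.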
Good to hear you're back up and running! It sure does feel like a breaking change/incompatibility between Alert Manager and Splunk 9.3. Maybe we'll hold off on updating until 9.3.1. All the best.
We restored a backup. Splunk is back at version 9.2.2 and everything is working as before. I checked the Alert Manager before upgrading, and it should be compatible with 9.3.0. We will give it another try in a few weeks. Once again, thanks for your help. Much appreciated.
Hi @elend, you can add the Time tokens that you passed to the earliest and latest fields. In the secondary dashboard, if the Time tokens are called $earliest$ and $latest$: index=your_index earliest=$earliest$ latest=$latest$ | ... Ciao. Giuseppe
Hi @sarlacc, good for you, see you next time! Please accept your last message to help other people of the Community find the right solution. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Hi @nabeel652, to my knowledge, you can change the colour of the background or of the text based on the value of the field, but I don't think it's possible to change both of them. Ciao. Giuseppe
First off, the phrase "doesn't work" conveys little information in the best of cases and should be banished. Describe your data and illustrate the output, then explain why the output differs from the desired output unless it is painfully obvious. (See my Four Commandments below.)

Back to your search. You already say that the search does not meet your requirement. Why insist on using append? To get unique buildings in index events, you look up any matching value, then exclude the matching events. What is left are events with unmatched buildings. Not only is this approach more semantic, but using lookup is also more efficient because it's a binary tree search.

About that roomlookup_buildings.csv: have you defined a lookup to use this file? In Splunk, a lookup definition can be independent of the lookup file, meaning you need a definition. (The lookup definition doesn't have to use the same name as the file, but it must use the file as its source. My convention is to name a lookup without .csv, but that's up to you. I will assume that your definition is called roomlookup_buildings.csv.) Does the buildings column contain one value per row? (I will assume yes. There is no good reason not to.) What are those escaped quotation marks? Are they part of the field values, or do you simply use them to signal that the values are between quotes? (I will assume the values are between quotes.)

If you have already defined a lookup, let's also call it roomlookup_buildings.csv, and let's assume that each row contains one value for building, i.e.,

buildings
Aachen 1
Almanor 1
Almanor 2
Antara

Further assume that your index search has these events:

building_from_search1   request_unique_id
Aachen 1                ID 1
Almanor 1               ID 2
Almanor 2               ID 2
Amsterdam               ID 3

Then, you run

| lookup roomlookup_buildings.csv buildings as building_from_search1 output buildings as matching_building

This should give you

building_from_search1   matching_building   request_unique_id
Aachen 1                Aachen 1            ID 1
Almanor 1               Almanor 1           ID 2
Almanor 2               Almanor 2           ID 2
Amsterdam                                   ID 3

Apply the filter,

| lookup roomlookup_buildings.csv buildings as building_from_search1 output buildings as matching_building
| where isnull(matching_building)

This results in

building_from_search1   matching_building   request_unique_id
Amsterdam                                   ID 3

Then, apply stats to the whole thing:

index=buildings_core "Buildings updated in database*"
| rex "REQUEST_UNIQUE_ID:(?<request_unique_id>[^ ]+)"
| rex "Buildings updated in database:\s(?<buildings>\{[^}]+\})"
| eval buildings = replace(buildings, "[{}]", "")
| eval buildings = split(buildings, ",")
| mvexpand buildings
| eval building_from_search1 = mvindex(split(buildings, ":"), 1)
| lookup roomlookup_buildings.csv buildings as building_from_search1 output buildings as matching_building
| where isnull(matching_building)
| stats values(building_from_search1) as unmatching_buildings by request_unique_id

That mock data gives

request_unique_id   unmatching_buildings
ID 3                Amsterdam

Is this what you expect from that mock data?

Here, I am illustrating four golden rules of asking an answerable question in data analytics, which I call the Four Commandments:

1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.

Here is an emulation for you to play with and compare with real data. This emulation is used to generate the above mock data. If your real data (including the lookup) is different, you need to carefully describe it.

| makeresults format=csv data="building_from_search1, request_unique_id
Aachen 1, ID 1
Almanor 1, ID 2
Almanor 2, ID 2
Amsterdam, ID 3"
``` the above emulates
index=buildings_core "Buildings updated in database*"
| rex "REQUEST_UNIQUE_ID:(?<request_unique_id>[^ ]+)"
| rex "Buildings updated in database:\s(?<buildings>\{[^}]+\})"
| eval buildings = replace(buildings, "[{}]", "")
| eval buildings = split(buildings, ",")
| mvexpand buildings
| eval building_from_search1 = mvindex(split(buildings, ":"), 1) ```
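The lookup-then-`where isnull(...)` pattern is essentially an anti-join: keep only the events whose building has no match in the lookup. For comparison, here is the same logic run on the mock data in Python (the building names and request IDs are taken from the mock tables in the post):

```python
from collections import defaultdict

# Mock lookup and events, mirroring the tables in the post above.
lookup_buildings = {"Aachen 1", "Almanor 1", "Almanor 2", "Antara"}
events = [
    ("Aachen 1", "ID 1"),
    ("Almanor 1", "ID 2"),
    ("Almanor 2", "ID 2"),
    ("Amsterdam", "ID 3"),
]

# Anti-join: keep only events whose building is absent from the lookup,
# then group the unmatched buildings by request id (like the stats step).
unmatched = defaultdict(list)
for building, request_id in events:
    if building not in lookup_buildings:
        unmatched[request_id].append(building)

print(dict(unmatched))   # {'ID 3': ['Amsterdam']}
```

The result matches the SPL output above: only "Amsterdam" under "ID 3" survives the filter.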
See this example - I assume the colour was #ffc7c0. Set the block colours as needed, then use the token+CSS handling to get the text colour change.

<html depends="$hidden$">
  <style>
    #result_viz text {
      fill: $result_foreground$ !important;
    }
  </style>
</html>
<single id="result_viz">
  <title>Value &gt;0 colour #ffc7c0 or &lt;=0 #c6efce</title>
  <search>
    <query>| makeresults
| eval value=(random() % 100) - 50</query>
    <done>
      <eval token="result_foreground">if($result.value$&gt;0, "#9c0006", "#006100")</eval>
    </done>
  </search>
  <option name="colorMode">block</option>
  <option name="drilldown">all</option>
  <option name="height">60</option>
  <option name="rangeColors">["0xc6efce","0xffc7c0"]</option>
  <option name="rangeValues">[1]</option>
  <option name="useColors">1</option>
</single>
I want to use SSO and a reverse proxy to skip the login page and go directly to the service app page. I found several resources and created the setup shown below, but it doesn't skip the login when accessing those addresses.

The environment is as follows:
Ubuntu 20.04.6
Nginx 1.18
Splunk 8.2.9

Is it possible to implement login skipping with this configuration alone? Or is this only possible with additional authentication services such as LDAP, IIS authentication, SAML, etc.? If so, what additional areas of the above setup should we be looking at?

web.conf

[settings]
SSOMode = strict
trustedIP = 127.0.0.1,192.168.1.142,192.168.1.10
remoteUser = REMOTEUSER
tools.proxy.on = true
root_endpoint = /
enableWebDebug = true

server.conf

[general]
serverName = dev-server
sessionTimeout = 24h
trustedIP = 127.0.0.1

[settings]
remoteUser = REMOTEUSER

nginx.conf

server {
    listen 8001;
    server_name splunkweb;
    location / {
        proxy_pass http://192.168.1.10:8000/;
        proxy_redirect / http://192.168.1.10:8000/;
        proxy_set_header REMOTEUSER admin;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
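One thing worth double-checking in a setup like this: with SSOMode = strict, Splunk Web only honours the remote-user header when the request's source IP appears in web.conf's trustedIP list, so the IP the proxy actually connects from must be listed. A small Python sketch that parses a web.conf-style fragment and checks an IP against it; this is purely illustrative (the conf text repeats the values from the question), not a Splunk API:

```python
import configparser

# web.conf fragment as given in the question above.
WEB_CONF = """
[settings]
SSOMode = strict
trustedIP = 127.0.0.1,192.168.1.142,192.168.1.10
remoteUser = REMOTEUSER
"""

def proxy_ip_is_trusted(conf_text, proxy_ip):
    """Check whether the reverse proxy's source IP appears in the
    trustedIP list of a web.conf-style fragment."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    trusted = {ip.strip() for ip in cp["settings"]["trustedIP"].split(",")}
    return proxy_ip in trusted

print(proxy_ip_is_trusted(WEB_CONF, "192.168.1.142"))  # True
print(proxy_ip_is_trusted(WEB_CONF, "10.0.0.5"))       # False
```

If Splunk Web sees the connection arriving from an address that is not in trustedIP, it falls back to the normal login page, which would produce exactly the symptom described.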