All Posts


Hi @elend, you can add the Time tokens that you passed to the earliest and latest fields in the secondary dashboard. If the Time tokens are called $earliest$ and $latest$:

index=your_index earliest=$earliest$ latest=$latest$ | ...

Ciao. Giuseppe
Hi @sarlacc, good for you, see you next time! Please accept your last message as the solution to help other people in the Community find the right answer. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @nabeel652, to my knowledge you can change the colour of the background or of the text based on the value of the field, but I don't think it's possible to change both of them. Ciao. Giuseppe
First off, the phrase "doesn't work" conveys little information in the best of cases and should be banished. Describe your data and illustrate the output, then explain why the output is different from the desired output unless it is painfully obvious. (See my Four Commandments below.)

Back to your search. You already say that search does not meet your requirement. Why insist on using append? To get unique buildings in index events, you look up any matching value, then exclude those matching events. What is left are events with unmatched buildings. Not only is this approach more semantic, but using lookup is also more efficient because that's a binary tree search.

About that roomlookup_buildings.csv: have you defined a lookup to use this file? In Splunk, a lookup definition can be independent of the lookup file, meaning you need a definition. (The lookup definition doesn't have to use the same name as the file, but must use the file as source. My convention is to name a lookup without .csv, but that's up to you. I will assume that your definition is called roomlookup_buildings.csv.) Does the column buildings contain one value per row? (I will assume yes. There is no good reason not to.) What are those escaped quotation marks? Are they part of the field value, or do you simply use them to signal that the values are between quotes? (I will assume the values are between quotes.)

If you have already defined a lookup, let's also call it roomlookup_buildings.csv, and let's assume that each row contains one value for building, i.e.,

buildings
Aachen 1
Almanor 1
Almanor 2
Antara

Further assume that your index search has these events:

building_from_search1    request_unique_id
Aachen 1                 ID 1
Almanor 1                ID 2
Almanor 2                ID 2
Amsterdam                ID 3

Then, you run

| lookup roomlookup_buildings.csv buildings as building_from_search1 output buildings as matching_building

This should give you

building_from_search1    matching_building    request_unique_id
Aachen 1                 Aachen 1             ID 1
Almanor 1                Almanor 1            ID 2
Almanor 2                Almanor 2            ID 2
Amsterdam                                     ID 3

Apply the filter,

| lookup roomlookup_buildings.csv buildings as building_from_search1 output buildings as matching_building
| where isnull(matching_building)

This results in

building_from_search1    matching_building    request_unique_id
Amsterdam                                     ID 3

Then, apply stats to the whole thing

index= buildings_core "Buildings updated in database*"
| rex "REQUEST_UNIQUE_ID:(?<request_unique_id>[^ ]+)"
| rex "Buildings updated in database:\s(?<buildings>\{[^}]+\})"
| eval buildings = replace(buildings, "[{}]", "")
| eval buildings = split(buildings, ",")
| mvexpand buildings
| eval building_from_search1 = mvindex(split(buildings, ":"), 1)
| lookup roomlookup_buildings.csv buildings as building_from_search1 output buildings as matching_building
| where isnull(matching_building)
| stats values(building_from_search1) as unmatching_buildings by request_unique_id

That mock data gives

request_unique_id    unmatching_buildings
ID 3                 Amsterdam

Is this what you expect from that mock data?

Here, I am illustrating four golden rules of asking an answerable question in data analytics, which I call the Four Commandments:

1. Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.

Here is an emulation for you to play with and compare with real data. This emulation is used to generate the above mock data. If your real data (including the lookup) is different, you need to describe them carefully.

| makeresults format=csv data="building_from_search1, request_unique_id
Aachen 1, ID 1
Almanor 1, ID 2
Almanor 2, ID 2
Amsterdam, ID 3"
``` the above emulates
index= buildings_core "Buildings updated in database*"
| rex "REQUEST_UNIQUE_ID:(?<request_unique_id>[^ ]+)"
| rex "Buildings updated in database:\s(?<buildings>\{[^}]+\})"
| eval buildings = replace(buildings, "[{}]", "")
| eval buildings = split(buildings, ",")
| mvexpand buildings
| eval building_from_search1 = mvindex(split(buildings, ":"), 1) ```
See this example - I assume the colour was #ffc7c0. Set the block colours as needed, then the token+CSS handling to get the text colour change.

<html depends="$hidden$">
  <style>
    #result_viz text {
      fill: $result_foreground$ !important;
    }
  </style>
</html>
<single id="result_viz">
  <title>Value &gt;0 colour #ffc7c0 or &lt;=0 #c6efce</title>
  <search>
    <query>| makeresults | eval value=(random() % 100) - 50</query>
    <done>
      <eval token="result_foreground">if($result.value$&gt;0, "#9c0006", "#006100")</eval>
    </done>
  </search>
  <option name="colorMode">block</option>
  <option name="drilldown">all</option>
  <option name="height">60</option>
  <option name="rangeColors">["0xc6efce","0xffc7c0"]</option>
  <option name="rangeValues">[1]</option>
  <option name="useColors">1</option>
</single>
I want to use SSO and a reverse proxy to skip the login page and go directly to the service app page. I found several resources and created a setup as shown below, but it doesn't skip the login when accessing those addresses.

The environment is as follows:
Ubuntu 20.04.6
Nginx 1.18
Splunk 8.2.9

Is it possible to implement login skipping with this configuration alone? Or is this possible with additional authentication services such as LDAP or IIS authentication, SAML, etc.? If so, what additional areas of the above setup should we be looking at?

web.conf

[settings]
SSOMode = strict
trustedIP = 127.0.0.1,192.168.1.142,192.168.1.10
remoteUser = REMOTEUSER
tools.proxy.on = true
root_endpoint = /
enableWebDebug=true

server.conf

[general]
serverName = dev-server
sessionTimeout = 24h
trustedIP = 127.0.0.1

[settings]
remoteUser = REMOTEUSER

nginx.conf

server {
    listen 8001;
    server_name splunkweb;
    location / {
        proxy_pass http://192.168.1.10:8000/;
        proxy_redirect / http://192.168.1.10:8000/;
        proxy_set_header REMOTEUSER admin;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
OK, I have tried it and it does not work.
If you are passing tokens from dashboard A to dashboard B that are inputs in dashboard B, then use &form.token_name=bla, where token_name is the name of your token in dashboard B.
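For example, a drilldown in dashboard A could pass a value into dashboard B's input like the sketch below (the app name search, dashboard name dashboard_b, token name token_name, and row field field_name are placeholders, not taken from this thread):

<drilldown>
  <!-- Hypothetical target dashboard and token; replace with your own names. -->
  <link target="_blank">/app/search/dashboard_b?form.token_name=$row.field_name$</link>
</drilldown>

A token set by a time input should be passable the same way, using form.token_name.earliest and form.token_name.latest in the URL.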
Oh, apologies hahaha. Yeah, I tried using dev tools to see which REST endpoint the WebUI uses. Not for exporting to CSV but for exporting to PDF (it uses the pdfgen endpoint). But I can't find Splunk documentation on it anywhere and other forums don't seem to have a working solution. Anyway, thanks for the recommendation. I will raise a case with support for now.
Hi @yuanliu, Sorry, but the previous approach hasn't worked for me. Let me provide the full context with the entire query. I am trying to compare building names from two sources: an indexed search and a lookup file.

For example, the building_from_search1 values from the indexed search are:
\"Aachen 1\"
\"Almanor 1\"
\"Almanor 2\"
\"Amsterdam\"

The lookup file, which has a column named buildings, contains values like:
\"Aachen 1\"
\"Almanor 1\"
\"Almanor 2"
\"Antara"

Currently, I am using the mvappend command to combine both sets and filter for values with a count of 1. However, this approach gives me unique values from both searches, not just the unique values from the indexed search. The target is to print unique values from the indexed search only. In this example, "Amsterdam" should be included in the result, but I am currently getting both "Amsterdam" and "Antara."

Here is the query I am using:

index= buildings_core "Buildings updated in database*"
| rex "REQUEST_UNIQUE_ID:(?<request_unique_id>[^ ]+)"
| rex "Buildings updated in database:\s(?<buildings>\{[^}]+\})"
| eval buildings = replace(buildings, "[{}]", "")
| eval buildings = split(buildings, ",")
| mvexpand buildings
| eval building_from_search1 = mvindex(split(buildings, ":"), 1)
| stats values(building_from_search1) as buildings_from_search1 by request_unique_id
| append
    [ | inputlookup roomlookup_buildings.csv
      | stats values(buildings) as buildings_from_search2 ]
| eval all_buildings = mvappend(buildings_from_search1, buildings_from_search2)
| stats count by all_buildings
| where count = 1
| stats values(all_buildings) as all_buildings
| eval source="buildings_lacking_timezone_data"
| table source, all_buildings
Yes. I know that &amp; is an HTML entity. Hence the smiley. Anyway. If exporting from the WebUI works OK and the REST-initiated export does not, there are two things you can do: 1) As I mentioned, raise a case with support. It seems like a bug; a proper CSV should be properly quoted/escaped. 2) You can use developer tools to check which REST endpoint the WebUI uses.
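For reference, REST-side CSV export normally goes through the search/jobs/export endpoint; a minimal sketch for comparison against what the WebUI does (host, credentials, and the search string below are placeholders, not from this thread):

# Hypothetical host and credentials; the search is only an example.
curl -k -u admin:changeme "https://splunk.example.com:8089/services/search/jobs/export" \
     --data-urlencode search='search index=_internal | head 5' \
     -d output_mode=csv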
@Siddharthnegi - I'm not saying to change it permanently; change it just to see whether it works or not. If that also does not work, it means the issue is with how you are editing the file, or the file location, or something like that. I asked just to validate whether it's an issue with how you are editing the file or a Splunk bug. I hope this helps!!!
Hello Splunkers,

In a Single Value viz I know we can change the text colour or the background one at a time, but I have a requirement to control both text and background colour in a single value visualisation. For example:

IF result > 0
    Text: #9c0006
    Background: #ffc7c
ELSE
    Text: #006100
    Background: #c6efce

I'm using Splunk Cloud so I don't have the option to use JavaScript; a simple CSS solution is needed. Any help will be appreciated.
How about setting the time value on the linked dashboard? If I delete the time range on the linked dashboard (B), the visualization waits for the token. But if I add a time range there, the global time from the destination dashboard overwrites it.
"Reports" tab of one of our apps is missing from the Navigation bar as seen in the image below.   Below is the content of default.xml from "local/data/ui/nav" directory. Everything except "Repo... See more...
"Reports" tab of one of our apps is missing from the Navigation bar as seen in the image below.   Below is the content of default.xml from "local/data/ui/nav" directory. Everything except "Reports" tab is in <view> tag but reports is in <collection> tag. Can anyone please help in bringing this report tab back and explain how this collection tag works.  
the &amp is HTML encoding to escape the & character. Exporting to CSV works via the API but when field values are multi-value and within each multi-value there is a comma as part of the data, exporti... See more...
the &amp is HTML encoding to escape the & character. Exporting to CSV works via the API but when field values are multi-value and within each multi-value there is a comma as part of the data, exporting to CSV doesn't work. 
My three cents:

| where NOT a in b

or

| where NOT b=a

(as you can do with multivalued fields) is NOT the same as

| where a!=b

The first form filters out all results where the value of a appears anywhere in field b, i.e. as any one of the values of the multivalued field, whereas the second form keeps all results which have at least one value in field b that is different from a. Also, results with an empty field b are treated differently.
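A quick way to see this on throwaway data (the field names a and b and the sample values are made up for illustration):

| makeresults count=1
| eval b=split("alpha,beta,gamma", ","), a="beta"
| where NOT b=a

Run it as-is, then swap the last line for | where a!=b (and try b=null() for the empty case), and compare which runs keep the event.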
If you have any news, please update this post. We made a support call to Red Hat without any luck. Hopefully it works for you.
The fields extracted with REPORT are extracted at search time, so they're not available at index time for INGEST_EVAL.
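To make the timing difference concrete, a minimal sketch of the two kinds of configuration (the sourcetype, stanza, and field names are hypothetical):

# props.conf
[my_sourcetype]
# REPORT-* extractions run at search time
REPORT-extract_user = extract_user
# TRANSFORMS-* stanzas run at index time
TRANSFORMS-set_env = set_env_ingest

# transforms.conf
[extract_user]
REGEX = user=(?<user>\S+)

[set_env_ingest]
# INGEST_EVAL runs at index time, so it can use _raw and index-time fields,
# but not the search-time field "user" extracted by REPORT above.
INGEST_EVAL = env=if(match(_raw, "prod"), "prod", "dev")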
Hi @aina.rahman, Thank you for posting on the community.

To monitor the SQL Server and SQL Agent services and set up email alerts when they are down, you can follow these steps:

1. Enable the Email Server if not yet set up (see: steps to Enable the Email Server).
2. Set up related Health Rules, Policies, and Actions to monitor the services.

For SQL Server monitoring, utilize the AppDynamics Database Agent (see: Install Database Agent, Monitor Databases and Database Servers).

Steps:
- Set Health Rule: On Controller > Databases > Alert & Respond, configure a health rule. Use the metric DB|KPI|Database Availability to monitor whether the SQL Server is running.
- Set Policies: On Controller > Alert & Respond > Policies, create a new policy. Set the Trigger, Health Rule Scope, and Actions. Add an action and configure email to specify where to send alert emails.

By following these steps, I received email alerts when health rules are violated (SQL Server down/SQL Server stopped).

For SQL Server Agent monitoring, utilize the AppDynamics Machine Agent, which can monitor processes on the server (see: Machine Agent, Server Process Metrics).

Steps:
- Set up a health rule, policy, and action similarly to the Database Agent above.
- Note: When setting up the health rules, under Affected Entities, select Custom Health Rule Type to set process metrics such as memory usage or count in the Critical/Warning area.

By following these steps, I received email alerts when the SQL Agent service is down.

Other monitoring alternatives. Below are some possible alternatives to monitor your Windows services status:
- .NET Agent Extension to monitor Windows services. Reference: .NET Agent Extension Documentation
- Write your own Machine Agent Extension for custom monitoring of services. Reference: Machine Agent Documentation

Hope this helps.
Regards,
Martina