All Posts


@vananhnguyen For that, we need to know which values appear so we can map each one to a color. We can transpose the result so the status values become column names, and then set the colors. Please check the following run-anywhere example; makeresults is only used to create a set of dummy data:

{
  "visualizations": {
    "viz_PKMJkTej": {
      "type": "splunk.column",
      "options": {
        "y": "> primary | frameBySeriesNames('count','Critical','Failure','Info','Success')",
        "seriesColorsByField": {
          "Critical": "#dc4e41",
          "Failure": "#f8be34",
          "Success": "#53a051",
          "Info": "#0051B5"
        },
        "x": "> primary | seriesByName('count')",
        "y2": "> primary | frameBySeriesNames('Critical','Failure','Info','Success')"
      },
      "dataSources": {
        "primary": "ds_Lmyq9G4p"
      }
    }
  },
  "dataSources": {
    "ds_Lmyq9G4p": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults count=100 \n| eval value=random() \n| eval status=case(value%2==0,\"Success\",value%3==0,\"Failure\",value%4==0,\"Warning\",value%5==0,\"Critical\",1==1,\"Info\") \n| stats count by status \n| transpose header_field=status column_name=count"
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_PKMJkTej",
        "type": "block",
        "position": {
          "x": 0,
          "y": 0,
          "w": 1010,
          "h": 300
        }
      }
    ],
    "globalInputs": [
      "input_global_trp"
    ]
  },
  "description": "",
  "title": "Static Colors"
}
Hello, I have a search query like this: index=test dscip=192.168.1.1 OR dscip=192.168.1.2 ... I would like to search for this list of IPs based on system-alias in my lookup. This is my sample lookup.csv:

system-alias, system-ip
prod, 192.168.1.1
dev, 192.168.2.2
prod, 192.168.1.2

So what should the search query look like if I want to search only for the prod IPs?
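In case it helps later readers: one common pattern (a sketch, assuming the lookup is uploaded as lookup.csv and the field names are exactly system-alias and system-ip) is to feed the lookup into the base search as a subsearch, which expands into an OR list of dscip values:

```
index=test
    [| inputlookup lookup.csv
     | where 'system-alias'="prod"
     | rename "system-ip" as dscip
     | fields dscip ]
```

The subsearch returns something equivalent to (dscip="192.168.1.1" OR dscip="192.168.1.2"), so it replaces the hand-written OR chain. Note the single quotes around 'system-alias': in SPL, single quotes are how you reference a field name that contains a hyphen.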
Dear team, may I know why no further version of this Splunk application (Splunk App for Jenkins) has been released since 2020? This is a fantastic app, useful for visualising the Jenkins build status, access logs and other statistical data. Could you please check and confirm? Thanks.
Using a left join, you should get 1000 events from the first part of the search (left and outer mean the same thing). The where command would strip out events which didn't match, but you already said that the 1000 from the first/left side of the join match with 1000 from the second/right side of the join, so I would not expect it to remove any events.
Here is an example to create the results: | makeresults format=json data="[{\"browsers\":{\"0123456\":{\"id\":\"0123456\",\"fullName\":\"blahblah\",\"name\":\"blahblah\",\"state\":0,\"lastResult\":{\"success\":1,\"failed\":2,\"skipped\":3,\"total\":4,\"totalTime\":5,\"netTime\":6,\"error\":true,\"disconnected\":true},\"launchId\":7}},\"result\":{\"0123456\":[{\"id\":8,\"description\":\"blahblah\",\"suite\":[\"blahblah\",\"blahblah\"],\"fullName\":\"blahblah\",\"success\":true,\"skipped\":true,\"time\":9,\"log\":[\"blahblah\",\"blahblah\"]}]},\"summary\":{\"success\":10,\"failed\":11,\"error\":true,\"disconnected\":true,\"exitCode\":12}}]"
Hello, the Cisco add-on v2.7.3 slows our Splunk Enterprise production platform down a lot when it is activated. The search "index=xxxxx sourcetype=cisco:ios" goes from a few ms on our development platform to more than 1 hour on our production platform. Do you know if any configuration in the add-on could affect the performance of some operations that fully depend on the platform configuration? Thanks a lot for your suggestions!
Hi Splunkers, I have an issue with log forwarding from an HF to Splunk Cloud and I need a suggestion about troubleshooting. In this environment, some firewalls have been set up to send data to an HF, and the data then goes to Splunk Cloud. So the global flow is: firewall ecosystem -> HF -> Splunk Cloud.

On the HF, a network TCP input has been configured and it works fine: all firewalls added until now send data correctly to Cloud. Yesterday, the firewall admin configured a new one to send data to Splunk, but I cannot see its logs in our environment.

So, first of all, I asked the network admin to check the log forwarding configuration, and everything was done properly. Then I checked whether logs are coming from the firewall to the HF; a simple tcpdump on the configured port shows no resets or other suspicious flags. All captured packets have the [P] and [.] flags, with ACKs. So the data arrives where it is supposed to be collected.

Next, I checked the _internal logs, filtered by the firewall IP; no errors are shown by this search. I got logs from metrics.log and license.log (mainly from metrics), but no error messages are returned. However, when I query the configured index and sourcetype (which collect logs properly from the other firewalls), I cannot see the new one. I used both the IP and the hostname of the firewall device, but no logs are returned.

I thought: could it be that the data arrives at the HF but then doesn't go to Cloud? But in that case, I presume some error logs should appear. And supposing my assumption is correct, how could I check it?
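Two hedged run-anywhere checks that may help narrow this down (the host values below are placeholders, and the field names are the usual ones in metrics.log, so verify against your version). First, confirm the HF is actually indexing/reading data from the new source on the raw TCP input:

```
index=_internal source=*metrics.log* group=per_host_thruput series="<new-firewall-host-or-ip>"
| timechart span=5m sum(kb) AS kb_read
```

Second, check whether the HF's tcpout queues toward Cloud are blocking, which would explain data arriving at the HF but never reaching Cloud:

```
index=_internal source=*metrics.log* host="<hf-hostname>" group=queue name=tcpout*
| eval blocked=coalesce(blocked, "false")
| stats count by name, blocked
```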
Hello Community, I have a problem with the latest Enterprise Security version. In the Security Posture dashboard, when I drill down on the Top Notable Events, the URL is returned correctly, but one thing breaks the whole drilldown: the URL encoding is sometimes applied twice. So instead of %20 replacing spaces in the rule name used in the drilldown, the URL includes %2520. This breaks the drilldown, which then shows all rule names instead. The weirdest part is that, independent of which rule name is clicked, it sometimes works and sometimes doesn't. A reload through the associated button on the Incident Review page also fixes the error, but this is still a nuisance in the daily business. I have searched the web for similar experiences but haven't found anything. My question is whether anybody else has the same problem, so I can make sure that this is not some error from local files (which I checked, but it's always possible I missed something) but something that's broken by default. I'm not fond of changing anything in the deeper code of Enterprise Security, but if anybody has a solution to the problem I'd be glad!
@richgalloway @ITWhisperer Here's the query:

(index=app* (app=Application source="abc" "eventName=what is your name" *className IN (first*,second*,third*,fouth*)) OR (app=Application1 sourcetype="music:pqr" source="music/pqr.log" "Random raw msf" "status=COMPLETED" *className IN (first*,second*,third*,fouth*)) OR (source="xyz/eventmanagement/eventmanagement.log" "messages from _raw" name=my_amazon_order OR name=my_shiprocket_order *className IN (first*,second*,third*,fouth*))) OR (app=Application2 "raw message" source="aaa/orderdetailsave/orderdetailsave.log" **className IN (first*,second*,third*,fouth*)) earliest=$time.earliest$ latest=$time.latest$
| dedup field1
| eval component="FirstComp"
| join field1 type=outer [ search index=index1 index1=main sourcetype="log4j:*" source="/var/log/*/random.log" host="host1*" | dedup field1 | eval component="secondcomp" | eval field2=field1 ]
| where isnull(field2)
| table field1

The problem statement is that the first component has 1000 events, whereas the second component has 2000 events. When using an inner join, both components have 1000 common events. When using a left join, the result should be 0, but I'm getting the same 1000 events that are visible with the inner join. Also, the query structure needs to stay the same due to some prior JS changes.
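For what it's worth, when the goal is "in the first component but not the second", the join can often be avoided entirely with a stats-based pattern. A sketch, with <first component search> standing in for the existing base search above:

```
(<first component search>) OR (index=index1 sourcetype="log4j:*" source="/var/log/*/random.log" host="host1*")
| eval component=if(index="index1", "secondcomp", "FirstComp")
| stats values(component) AS components by field1
| where components="FirstComp"
```

Events whose field1 appears in both searches get a multivalue components field and are filtered out by the where clause, leaving only the field1 values unique to the first component.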
@pevniacik, makeresults is just there for me to create dummy events. It does not matter which search or values you use; what we need to check is whether the logic works for you. In this case, does the logic provided in the sample work for you with the checkbox and text input? If not, based on the sample dashboard, what changes do you foresee?
I see different forwarder counts using the following different methods:
- Looking at Forwarder Management on the license master
- Looking at Forwarders: Deployment on the license master
- Looking at dmc_forwarder_assets.csv inside /opt/splunk/etc/apps/splunk_monitoring_console/lookups/ on the license master
So, which one should I trust, and is there a better way?
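If it helps, you can also query the Monitoring Console's own asset table directly. A sketch, assuming the lookup keeps one row per forwarder with a guid column (verify the field names in your version):

```
| inputlookup dmc_forwarder_assets.csv
| stats dc(guid) AS forwarder_count
```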
Are there any good project ideas? I just started creating dashboards for our network team. I am trying to get more security-based projects and was wondering if there are any good ideas to help me get into security. Very new to Splunk!
That looks like 4 different events rather than 3.  Please confirm. Please share the props.conf settings for that sourcetype.
I am using the SolarWinds plugin, and I want to be able to click a device and have it take me to the device page in SolarWinds. I have taken the link from SolarWinds and added $LinkToken$ to the end, but it is not taking me there. Any advice on handling this? I am creating a dashboard using Splunk Dashboard Studio.
Hi, we are trying to integrate data from Splunk into ELK using a heavy forwarder. Can anyone suggest how inputs.conf can be configured so that it listens for the data which is on the search head, and then how, using outputs.conf, we can send the data to ELK via stash? Thanks.
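As a sketch of the outputs.conf side (the host and port are placeholders; sendCookedData = false tells Splunk to send raw, unparsed data over plain TCP so a non-Splunk receiver such as Logstash can read it):

```
# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = elk_output

[tcpout:elk_output]
server = elk-host.example.com:5044
sendCookedData = false
```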
Regarding Splunk Enterprise together with the Splunk Operator on Kubernetes: what would be the best way to disable the health probes so I can shut down Splunk and leave it shut down? Having the health probes is very nice, but it becomes a pain when doing maintenance on the individual Splunk pods. Any ideas?
Hi @Splunkerninja, You can use the query below: index=_internal source=*license_usage.log* type="Usage" | timechart span=1d eval(round(sum(b)/1024/1024/1024,3)) as GB by idx
@scelikok Yes, I tried it out with |where AA IN ('00','10') AND BB IN ('00','10') but it was not giving any output; the second one did work though. Thanks
The main idea is to have a graceful shutdown/start and run the necessary commands for the clusters. I had a look at your high-level steps and they look OK; I did something similar a while ago, stopping a Splunk cluster environment and bringing it back up. Shutting down the data-forwarding tier first is a good idea, otherwise the data will be lost with nowhere to go.

Shutdown order:
1. Place the CM in maintenance mode.
2. Shut down the Deployment Server / HFs, if in use.
3. Shut down the SHC: take note of the SHC captain, stop the SHC members and then the captain, and finally make sure they are down. Shut down the Deployer.
4. With the CM in maintenance mode, shut down the indexers using the normal commands (/opt/splunk/bin/splunk stop), one at a time, and make sure they are down.
5. Shut down the CM.

On the way back up:
1. Make sure the CM is up and still in maintenance mode. Bring all the indexers up and, once they are all up, disable maintenance mode. Check the status using the MC; the replication and search factors should show green, so you may have to wait a bit.
2. Bring the Deployer back up.
3. Bring the SHs up one by one: ensure the captain is up first, then the other SHC members, and check that they can communicate with it, using the SHC cluster commands to check status.
4. Bring back the Deployment Server / HFs.
5. Bring back the data-forwarding tier.
6. Use the MC to check overall health.

I would document all the steps and commands clearly, so you have a process to follow and checkpoints, rather than working in an ad-hoc manner given the many moving parts.
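The steps above map roughly onto these CLI commands (a sketch; paths assume a default /opt/splunk install, and the exact command set can vary by version):

```
# On the cluster manager, before stopping indexers
/opt/splunk/bin/splunk enable maintenance-mode
/opt/splunk/bin/splunk show maintenance-mode

# On any SHC member, to identify the captain and check member state
/opt/splunk/bin/splunk show shcluster-status

# Graceful stop on each instance, in the order described above
/opt/splunk/bin/splunk stop

# On the cluster manager, once all indexers are back up
/opt/splunk/bin/splunk disable maintenance-mode
/opt/splunk/bin/splunk show cluster-status
```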
Hi @man03359, You can use the syntax below: index="idx-some-index" sourcetype="dbx" source="some.*.source" AA IN (00,10) BB IN (00,10) or index="idx-some-index" sourcetype="dbx" source="some.*.source" (AA=00 OR AA=10) (BB=00 OR BB=10)