All Posts


Hi @sekhar463, first of all, you probably don't need the crcSalt option, so try without it. In any case, the syntax is fixed: <SOURCE> isn't a variable, you have to use it as is: crcSalt = <SOURCE>  Ciao. Giuseppe
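For context, crcSalt = <SOURCE> appends the monitored file's own path to the CRC seed, so two files whose first bytes are identical are still tracked separately. A minimal sketch (the path, index, and sourcetype here are placeholders, not taken from the original post):

```
[monitor:///var/log/myapp.log]
index = main
sourcetype = myapp
# <SOURCE> is a literal token; Splunk substitutes the file's path at runtime
crcSalt = <SOURCE>
```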
Hi @sekhar463, what's your issue? In any case, load this sourcetype both on the UF and on the Search Head. Ciao. Giuseppe
I have added this file to monitoring to ingest data, but the data is not getting ingested. The log file path is /tmp/mountcheck.txt.

[monitor:///tmp/mount.txt]
disabled = 0
index = Test_index
sourcetype = Test_sourcetype
initCrcLen = 1024
crcSalt = "unique_salt_value"
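One thing worth checking (an observation from the post itself, not a confirmed diagnosis): the stated log file path is /tmp/mountcheck.txt, but the monitor stanza watches /tmp/mount.txt. A stanza matching the stated path would look like:

```
[monitor:///tmp/mountcheck.txt]
disabled = 0
index = Test_index
sourcetype = Test_sourcetype
```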
I have the stanza below to ingest a JSON data file; it is deployed via the deployment server, and I added the props.conf on the HF as shown. When I initially uploaded the file using the Splunk UI, I got all the events in one line.

inputs.conf:
[monitor:///var/log/Netapp_testobject.json]
disabled = false
index = Test_index
sourcetype = Test_sourcetype

props.conf:
[Test_sourcetype]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = ([{}\,\s]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
EVENT_BREAKER = ([{}\,\s]+)
INDEXED_EXTRACTIONS = json
KV_MODE = json
TRUNCATE = 0

The JSON data looks like this:
[
  {
    "Name": "test name",
    "Description": "",
    "DNSHostname": "test name",
    "OperatingSystem": "NetApp Release 9.1",
    "WhenCreated": "2/13/2018 08:24:22 AM",
    "distinguishedName": "CN=test name,OU=NAS,OU=AVZ Special Purpose,DC=corp,DC=amvescap,DC=net"
  },
  {
    "Name": "test name",
    "Description": "London DR smb FSX vserver",
    "DNSHostname": "test name",
    "OperatingSystem": "NetApp Release 9.13.0P4",
    "WhenCreated": "11/14/2023 08:43:36 AM",
    "distinguishedName": "CN=test name,OU=NAS,OU=AVZ Special Purpose,DC=corp,DC=amvescap,DC=net"
  }
]
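Note that with INDEXED_EXTRACTIONS=json the structured parsing happens on the forwarder, so a LINE_BREAKER placed on the HF may never be applied to this data. A line-breaking-only alternative for a JSON array file, as a sketch (the regex assumes objects are separated by `},` and it leaves the opening `[` on the first event and the closing `]` on the last one):

```
[Test_sourcetype]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
# break between consecutive JSON objects; the captured separator is discarded
LINE_BREAKER = \}(\s*,\s*)\{
TRUNCATE = 0
KV_MODE = json
```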
Hi @indeed_2000, when you say "splunk apm", do you mean the Splunk installation file, or something else? If you mean the Splunk installation file, it depends on your operating system, and you can follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Whatsinthismanual You can download the installer from https://www.splunk.com/en_us/download/splunk-enterprise.html Ciao. Giuseppe
Hi @HankinAlex, first of all, the port you are using is unusual: the default port from UF to IDX is 9997. Anyway: did you configure your IDX to receive logs from UFs on this port [Settings > Forwarding and Receiving > Receiving]? Did you configure your UF to send logs to the IDX by editing the outputs.conf file? You can find detailed instructions at https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/Usingforwardingagents Ciao. Giuseppe
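For reference, a minimal outputs.conf on the UF side might look like this sketch (the hostname is a placeholder; the port must match the receiving port enabled on the IDX):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx.example.com:9997
```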
Hi, we have created an application using the Splunk Add-on Builder (https://apps.splunk.com/app/2962/) and created a Python script for the alert action. While validating the created add-on, we are getting two errors; the rest of the test cases pass. Sharing the errors below.

First error:

{"validation_id": "v_1703053121_88", "ta_name": "TA-testaddon", "rule_name": "Validate app certification", "category": "app_cert_validation", "ext_data": {"is_visible": true}, "message_id": "7004", "description": "Check that no files have *nix write permissions for all users (xx2, xx6, xx7). Splunk recommends 644 for all app files outside of the bin/ directory, 644 for scripts within the bin/ directory that are invoked using an interpreter (e.g. python my_script.py or sh my_script.sh), and 755 for scripts within the bin/ directory that are invoked directly (e.g. ./my_script.sh or ./my_script). Since appinspect 1.6.1, check that no files have nt write permissions for all users..", "sub_category": "Source code and binaries standards", "solution": "There are multiple errors for this check. Please check \"messages\" for details.", "messages": "[{\"result\": \"warning\", \"message\": \"Suppressed 813 failure messages\", \"message_filename\": null, \"message_line\": null}, {\"result\": \"failure\", \"message\": \"A posix world-writable file was found. File: bin/ta_testaddon/aob_py3/splunktalib/splunk_cluster.py\", \"message_filename\": null, \"message_line\": null}]", "severity": "Fatal", "status": "Fail", "validation_time": 1703053540}

Second error:

{"validation_id": "v_1703053679_83", "ta_name": "TA-testaddon", "rule_name": "Validate app certification", "category": "app_cert_validation", "ext_data": {"is_visible": true}, "message_id": "7002", "description": "Check that the dashboards in your app have a valid version attribute.", "sub_category": "jQuery vulnerabilities", "solution": "Change the version attribute in the root node of your Simple XML dashboard default/data/ui/views/home.xml to `<version=1.1>`. Earlier dashboard versions introduce security vulnerabilities into your apps and are not permitted in Splunk Cloud File: default/data/ui/views/home.xml", "severity": "Fatal", "status": "Fail", "validation_time": 1703053994}

Kindly help us resolve these.
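The first error flags world-writable (mode xx2/xx6/xx7) files bundled by the Add-on Builder. A minimal sketch of a fix, assuming the packaged app is unpacked locally into a directory named TA-testaddon (the example file created here is only for illustration):

```shell
# Example app directory (stand-in for the real unpacked TA-testaddon package)
mkdir -p TA-testaddon/bin
printf 'print("hello")\n' > TA-testaddon/bin/my_script.py
chmod 666 TA-testaddon/bin/my_script.py   # world-writable: AppInspect flags this

# List files that are world-writable (the condition the check flags)
find TA-testaddon -type f -perm -0002 -print

# Clear world-write: 644 for interpreter-invoked files; directly-invoked
# scripts (./my_script.sh) would need 755 instead, per the AppInspect message
find TA-testaddon -type f -perm -0002 -exec chmod 644 {} +
```

For the second error, despite the garbled `<version=1.1>` in the message, Simple XML expects the version as an attribute on the dashboard's root node, e.g. `<dashboard version="1.1">` in default/data/ui/views/home.xml.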
Hi @nyajoefit22, you should try to use a regex not only for the TaskCategory field but for the whole rule, something like this:

blacklist7 = EventCode\s*\=\s*4769.*TaskCategory\=\w+\s\w+\s\w+\s\w+

I could be more detailed if you can share a sample of your logs. You can find many answers to this question in the Community. Ciao. Giuseppe
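For context, an unkeyed blacklist entry in a Windows event log input stanza is a regex matched against the rendered event text. A sketch of where such a rule lives (the stanza name and the four-word TaskCategory value are assumptions, since no log sample was shared):

```
[WinEventLog://Security]
# drop 4769 events whose TaskCategory is four space-separated words
blacklist7 = EventCode\s*\=\s*4769.*TaskCategory\=\w+\s\w+\s\w+\s\w+
```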
Hi @zijian, every time I have a crash on a Splunk system I open a case with Splunk Support, sending them a diag of the server. Especially with such frequent crashes on both production indexers, service continuity is in danger: I'd open a case with priority 2 or 1. Ciao. Giuseppe
Hi @catanoium, as you can read in the last action, the file is inputs.conf. Ciao. Giuseppe
Hi @aaronbarry73, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @nithys, this is a different question, even if it concerns the same search, and you can find many answers to it in the Community; as always, I suggest opening a new question in the Community to get a faster and probably better answer. Anyway, you can assign fixed colours to your values via the GUI or in the dashboard code: via the GUI, open your dashboard in Edit mode, click the pencil at the top right of the panel, then choose the colours; in code, customize this for your requirements:

<option name="charting.fieldColors">{"Total":0x333333,"400":0xd93f3c,"200Healthy":0x65a637}</option>

Ciao. Giuseppe
Hello, is there a way to see the full URL of a particular slow Transaction Snapshot? I believe some of the slow search requests in our system could be caused by specific user input that is part of the dynamic URL. But in the Transaction Snapshot dashboard (and in the Transaction Snapshot overview), I only see the aggregated short URL without the user input. Full URL example: https://host/Search/userInput Transaction Snapshot dashboard: Individual transaction overview: Also, I don't think I have access to the Analytics dashboard.
Hi @gcusello, with the provided query I am able to get a column chart which shows the total number of requests and the 200, 400, and 500 status codes. But how can I show 200 as green, 400 as orange, and 500 as red? I tried the options below inside the source but am unable to get the colors in the column chart:

<option name="charting.chart">column</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.drilldown">all</option>
<option name="charting.fieldColors">{"200":0xFF0000,201:0x33ff00,204:0x66ff00,303:0xffaa00,304:0xffff00,404:0xff0000}</option>
<option name="charting.legend.placement">right</option>
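A sketch of a corrected option, assuming the series names are exactly the strings "200", "400", and "500": in charting.fieldColors every key must be a quoted string matching the series name, and every value a 0xRRGGBB number (the hex shades below are illustrative green/orange/red choices, not mandated values):

```
<option name="charting.fieldColors">{"200": 0x65A637, "400": 0xF8BE34, "500": 0xD93F3C}</option>
```

In the original attempt, only "200" was quoted, and it was mapped to red (0xFF0000), while the other keys (201, 204, 303, …) were unquoted and don't match any series in the chart.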
@dtburrows3  Thank you for your help. It works!
Hi, I have two clustered indexers which are now constantly generating crash logs in /splunk/var/log/splunk every few minutes, and I am unable to figure out the cause from the crash log or the error in splunkd.log. Would anyone here be able to shed some light on this?

splunkd error:

WARN SearchProcessRunner [19356 PreforkedSearchesManager-0] - preforked process=0/38 status=killed, signum=6, signame="Aborted", coredump=1, uptime_sec=37.282768, stime_sec=19.850199, max_rss_kb=472688, vm_minor=902282, vm_major=37, fs_r_count=608, fs_w_count=50856, sched_vol=3413, sched_invol=10923

Contents of one of the crash logs:

[build b6436b649711] 2023-11-02 11:39:40 Received fatal signal 6 (Aborted) on PID 23624.
Cause: Signal sent by PID 23624 running under UID 1001.
Crashing thread: BucketSummaryActorThread
Registers:
RIP: [0x00007F0D7E2DA387] gsignal + 55 (libc.so.6 + 0x36387)
RDI: [0x0000000000005C48] RSI: [0x00000000000059CC] RBP: [0x0000000000000BE7] RSP: [0x00007F0CF85F2268]
RAX: [0x0000000000000000] RBX: [0x0000562A9ADF7598] RCX: [0xFFFFFFFFFFFFFFFF] RDX: [0x0000000000000006]
R8: [0x00007F0CF85FF700] R9: [0x00007F0D7E2F12CD] R10: [0x0000000000000008] R11: [0x0000000000000206]
R12: [0x0000562A9AC0E070] R13: [0x0000562A9AF9CFB0] R14: [0x00007F0CF85F2420] R15: [0x00007F0CF806F260]
EFL: [0x0000000000000206] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x0000000000000033]
OLDMASK: [0x0000000000000000]

Regards, Zijian
You may need to specify the "total_time" field as the field to return descriptive statistics on, instead of using it as a by-field in this search. Something like this:

index=idx_prd_analysis sourcetype="type:prd_analysis:result" corp="AUS"
| eval total_time='End_time'-'Start_time'
| stats median(total_time) as median, min(total_time) as min, max(total_time) as max, p25(total_time) as lowerquartile, p75(total_time) as upperquartile
| eval iqr='upperquartile'-'lowerquartile', scalar=1.5, lowerwhisker='median'-('scalar'*'iqr'), upperwhisker='median'+('scalar'*'iqr')
I am trying to make a box plot graph using <viz>. However, my code gives this error: "Error in 'stats' command: The number of wildcards between field specifier '*' and rename specifier 'lowerquartile' do not match. Note: empty field specifiers implies all fields, e.g. sum() == sum(*)" My code is this:

<viz type="viz_boxplot_app.boxplot"> <search> <query>index=idx_prd_analysis sourcetype="type:prd_analysis:result" corp="AUS" | eval total_time = End_time - Start_time | stats median, min, max, p25 AS lowerquartile, p75 AS upperquartile by total_time | eval iqr=upperquartile-lowerquartile | eval lowerwhisker=median-(1.5*iqr) | eval upperwhisker=median+(1.5*iqr) </query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="drilldown">all</option> <option name="refresh.display">progressbar</option> </viz>

I don't use any "eval" or string words in the "stats", but it still happens. How can I solve this problem?
That's an awesome explanation, @NullZero. We are facing similar issues, but in a somewhat different way. We have a 2-node Search Head Cluster, in which one member is the static captain and the other is a member. Often the non-captain member drops out of the cluster (it is not shown on the Search Head Clustering page). Each time, we manually restart Splunk, or the member's entire EC2 instance, and then it shows up on the cluster page again. Can I use the resync command to solve the issue, instead of restarting Splunk or the EC2 instance? Will it help? Thanks for your help.
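For reference, the member-level resync is run from the out-of-sync member's CLI. A sketch only: it re-pulls the replicated configuration from the captain, so if the member keeps dropping out for another reason (network, heartbeat timeouts), a resync alone won't address the root cause:

```
splunk resync shcluster-replicated-config
```

Note also that Splunk's documentation recommends at least three members for a search head cluster; a two-member cluster with a static captain is a special-case deployment.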
I'm sorry if this is hard to read; I don't understand English and am using a translation app. Currently, I am not able to chain reports in Splunk so that a report runs only after its prerequisite has finished. Instead, the subsequent processing is scheduled with a time margin after the expected completion time of the prerequisite process. With this method, however, there is a risk that the subsequent processing will start before the prerequisite process has completed. We have a lot of reports to process and don't want to extend the schedule interval. Does anyone know a solution to this challenge?